HILO, Hawai‘i — The state of Hawaii is preparing for the advent of artificial intelligence through a pair of proposed laws on the verge of passage.
While several bills relating to the rapidly advancing technology were introduced this year in the state Legislature, only two have successfully passed through both chambers. One of the measures would establish regulations aimed at preventing the use of AI to spread misinformation, while the other would develop a potentially life-saving program to predict wildfires.
The latter, Senate Bill 2284, would establish a two-year program at the University of Hawaii to develop an AI-driven wildfire forecast system, allocating $1 million to the university for the work.
The project’s principal investigator, UH Manoa professor Sayed Bateni, said predicting wildfires is difficult at best because of the extremely intricate network of environmental factors that cause them.
“There is a very complex nonlinear relationship between so many different climate factors,” Bateni told the Tribune-Herald on Tuesday. “We would need machine learning to figure out that relationship.”
Machine learning models have been used to forecast other climate phenomena, Bateni said, but not wildfires, because of that aforementioned complexity.
Bateni said that by feeding existing climate data into a machine learning model, the model should eventually be able to predict the likelihood of a wildfire more accurately than traditional forecasting. The model’s output could then be incorporated into the National Weather Service’s Red Flag Warning system, which warns of fire risks during periods of warm temperatures, low humidity and high winds.
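For readers curious what such a model looks like in practice, below is a minimal, hypothetical sketch of the general approach Bateni describes: training a machine-learning classifier on historical climate variables to estimate the probability of a high fire-risk day. The data, feature names and model choice are illustrative assumptions, not the UH project's actual design.

```python
# Illustrative sketch only: a classifier trained on climate variables
# (temperature, humidity, wind speed, recent rainfall) to estimate wildfire risk.
# All data here is synthetic; the real project would use Hawaii observations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for historical daily observations:
# columns = [max temperature (C), relative humidity (%), wind speed (km/h), 30-day rainfall (mm)]
n_days = 5000
X = np.column_stack([
    rng.normal(29, 4, n_days),     # temperature
    rng.uniform(20, 95, n_days),   # humidity
    rng.gamma(2.0, 8.0, n_days),   # wind speed
    rng.gamma(2.0, 30.0, n_days),  # recent rainfall
])

# Synthetic labels: fire days are more likely when it is hot, dry and windy
# with little recent rain -- the kind of nonlinear interaction Bateni describes.
risk = 0.08 * (X[:, 0] - 25) - 0.05 * (X[:, 1] - 50) + 0.06 * (X[:, 2] - 15) - 0.02 * (X[:, 3] - 60)
y = (rng.random(n_days) < 1 / (1 + np.exp(-risk))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A random forest can capture nonlinear relationships among the climate factors.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# The model outputs a fire-risk probability for each day, which in principle
# could be compared against thresholds like those used for Red Flag Warnings.
probs = model.predict_proba(X_test)[:, 1]
print("ROC AUC on held-out days:", round(roc_auc_score(y_test, probs), 3))
```

In an operational system, the inputs would be island-specific weather and vegetation observations rather than synthetic numbers, and the predicted probabilities could inform when fire-weather warnings are issued.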
While the bill makes specific mention of the deadly 2023 Lahaina wildfire, Bateni said the model will be used to predict wildfires throughout the state — a daunting task, given the diverse and varying microclimates on each island.
Bateni said the forecast system should be operational by the end of the two-year program, although the bill also requires that UH submit a report to the Legislature about the project’s effectiveness by 2026.
The other bill takes a more cynical perspective toward AI. Senate Bill 2687 would prohibit the distribution of “materially deceptive media” that could tarnish or otherwise impact the reputation of an electoral candidate during election years — specifically, the period between the first working day of February in an even-numbered year and the next general election.
People misrepresented by such media would be able to sue distributors for damages, and could have a court issue a temporary or permanent injunction against those distributors.
Certain circumstances could lead to criminal charges, however. Distributors of materially deceptive media would be required to include citations listing the source of the material used to generate the final product. Failing to do so would be a petty misdemeanor; a second offense within five years would be a misdemeanor, and distributing such media with intent to cause violence would be a Class C felony, punishable by up to five years in prison and $10,000 in fines.
However, the measure specifically defines what “materially deceptive media” actually is: an advertisement generated by AI or related technologies, including generative or deep learning neural networks, that depicts an individual saying or doing things the person did not actually say or do, in a way that could cause a reasonable viewer to believe the depiction is genuine.
The bill does not flatly prohibit such material from being shared, instead establishing criteria for how disclaimers must be displayed for different types of media: clearly and constantly visible text for a video or an image, and a clearly spoken disclaimer at the beginning of an audio file.
Meanwhile, broadcasters and cable operators would not be prohibited from distributing the offending material unless they were involved in its creation, while streaming services also would be exempt unless they knew the content was deceptive and was intended to deceive a resident of the state.
The measure has been controversial. While state agencies and consumer groups have generally supported the intent of the bill, dozens of private citizens have testified against it, calling it a violation of the First Amendment.
“We should not be allowing any persons or organizations to determine if our media materials or comments are deceptive,” wrote Hilo resident Joy Dillon. “No one has that right. We each have the right to decide for ourselves. … (This bill) will have disastrous results that do not adhere to our democratic republic principles.”
Most detractors agreed the language in the bill is vague and could be interpreted selectively to suit any chosen agenda.
Both bills were sent to Gov. Josh Green’s desk on May 2 and May 3 and await his signature. He has 45 days from transmittal to sign or veto them; if he does neither, they become law by default.
AI has been a controversial technology in recent years, as consumer-available generative AI models have grown more sophisticated and convincing. Even as major businesses move to embrace the technology (Google recently incorporated AI-generated results into its search engine), Avril Haines, the U.S. director of national intelligence, warned Congress earlier this month that convincing “deepfake” images pose a destabilizing threat to the nation’s election security.
Email Michael Brestovansky at mbrestovansky@hawaiitribune-herald.com.