SPRINGFIELD, Mass. (WGGB/WSHM) – Concerns are now arising, after a new bill passed last month by the House of Representatives is said to potentially impact millions of Americans enrolled in health insurance marketplaces, including right here in Massachusetts.
The recently passed reconciliation bill, combined with the impending expiration of federal tax credits at the end of the year, is likely to increase the uninsured rate and raise costs in communities not just here in Massachusetts, but across the country.
U.S. Senator Ed Markey spoke out on the bill, saying it could put more than 300 rural hospitals across the U.S., which are already strained, at risk of closure, conversion, or service reductions, according to research data from the University of North Carolina at Chapel Hill.
Many Americans living in these rural areas, including here in our area, say they’re concerned the cuts will impact more than just hospitals. On Thursday, we spoke with Audrey Morse Gasteier, executive director for the Massachusetts Health Connector. She explained that the loss of coverage for millions of Americans could result in health care being more expensive and harder to access, create a strain on health care systems, and hurt small businesses.
Legislators are also saying this bill could create longer wait times for emergency services when calling 9-1-1, or force patients to have to travel farther for care.
Copyright 2025. Western Mass News (WGGB/WSHM). All rights reserved.
Evaluating LLMs for Inference, or Lessons from Teaching for Machine Learning – Towards Data Science
The world’s leading publication for data science, AI, and ML professionals.
It’s like grading papers, but your student is an LLM
In its simplest form, the task of evaluating an LLM is actually very familiar to practitioners in the machine learning field — figure out what defines a successful response, and create a way to measure it quantitatively. However, this task looks very different when the model is producing a number or a probability versus when the model is producing text.
For one thing, interpreting the output is significantly easier with a classification or regression task. For classification, your model produces a probability of the outcome, and you determine the best threshold of that probability to define the difference between “yes” and “no”. Then, you measure things like accuracy, precision, and recall, which are extremely well-established and well-defined metrics. For regression, the target outcome is a number, so you can quantify the difference between the model’s predicted number and the target with similarly well-established metrics like RMSE or MSE.
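For concreteness, the classical versions of these metrics are only a few lines each. These are plain-Python sketches of the standard definitions, shown here just to contrast with the LLM case:

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred):
    """Of everything predicted positive, how much actually was."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp)

def recall(y_true, y_pred):
    """Of everything actually positive, how much we caught."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

def rmse(y_true, y_pred):
    """Root mean squared error between targets and predictions."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Classification: thresholded predictions vs. labels
print(accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))          # 0.6
# Regression: predicted numbers vs. targets
print(round(rmse([3.0, -0.5, 2.0], [2.5, 0.0, 2.1]), 3))   # 0.412
```

Every change to the model's output moves these numbers up or down; there is no neutral direction. That is exactly the property we lose with generated text.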
But if you supply a prompt, and an LLM returns a passage of text, how do you define whether that returned passage constitutes a success, or measure how close that passage is to the desired result? What ideal are we comparing this result to, and what characteristics make it closer to the “truth”? While there is a general essence of “human text patterns” that the model learns and attempts to replicate, that essence is often vague and imprecise. In training, the LLM is given guidance about the general attributes and characteristics its responses should have, but there’s a significant amount of wiggle room in what those responses can look like without scoring either better or worse.
In classical machine learning, basically anything that changes about the output will take the result either closer to correct or further away. But an LLM can make changes that are neutral to the result’s acceptability to the human user. What does this mean for evaluation? It means we have to create our own standards and methods for defining performance quality.
Whether we are tuning LLMs or building applications using out-of-the-box LLM APIs, we need to come to the problem with a clear idea of what separates an acceptable answer from a failure. It’s like mixing machine learning thinking with grading papers. Fortunately, as a former faculty member, I have experience with both to share.
I always approached grading papers with a rubric, to create as much standardization as possible, minimizing bias or arbitrariness I might be bringing to the effort. Before students began the assignment, I’d write a document describing what the key learning objectives were for the assignment, and explaining how I was going to measure whether mastery of these learning objectives was demonstrated. (I would share this with students before they began to write, for transparency.)
So, for a paper that was meant to analyze and critique a scientific research article (a real assignment I gave students in a research literacy course), these were the learning outcomes:
Then, for each of these areas, I created four levels of performance, ranging from 1 (minimal or no demonstration of the skill) to 4 (excellent mastery of the skill). The sum of these points is then the final score.
For example, the four levels for organized and clear writing are:
This approach is founded in a pedagogical strategy that educators are taught: start from the desired outcome (student learning) and work backwards to the tasks, assessments, and other elements that can get you there.
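A rubric like this translates naturally into a data structure. The criteria and level descriptions below are invented placeholders for illustration, not my actual rubric:

```python
# A rubric as data: each criterion maps to four level descriptions, scored 1-4.
# These criteria and descriptions are illustrative examples only.
RUBRIC = {
    "organized_clear_writing": [
        "1: Disorganized; the reader cannot follow the argument.",
        "2: Some structure, but weak transitions make points hard to follow.",
        "3: Mostly logical organization with clear paragraphs and transitions.",
        "4: Tightly organized; every section advances a clear argument.",
    ],
    "critique_of_methods": [
        "1: No engagement with the article's methodology.",
        "2: Mentions methods but offers no real critique.",
        "3: Identifies at least one substantive methodological strength or weakness.",
        "4: Thorough, balanced critique of design, sampling, and analysis.",
    ],
}

def final_score(scores: dict) -> int:
    """Sum per-criterion scores (each 1-4) into a final score."""
    assert all(1 <= s <= 4 for s in scores.values()), "scores must be 1-4"
    return sum(scores.values())
```

Writing the rubric down as data, rather than prose, is what makes it usable by an automated evaluation pipeline later.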
You should be able to create something similar for the problem you are using an LLM to solve, perhaps using the prompt and generic guidelines. If you can’t determine what defines a successful answer, then I strongly suggest you consider whether an LLM is the right choice for this situation. Letting an LLM go into production without rigorous evaluation is exceedingly dangerous, and creates huge liability and risk to you and your organization. (In truth, even with that evaluation, there is still meaningful risk you’re taking on.)
If you have your evaluation criteria figured out, this may sound great, but let me tell you, even with a rubric, grading papers is arduous and extremely time-consuming. I don’t want to spend all my time doing that for an LLM, and I bet you don’t either. The industry-standard method for evaluating LLM performance these days is actually to use other LLMs, sort of like teaching assistants. (There’s also some mechanical assessment we can do, like running spell-check on a student’s paper before grading, and I discuss that below.)
This is the kind of evaluation I’ve been working on a lot in my day job lately. Using tools like DeepEval, we can pass the response from an LLM into a pipeline along with the rubric questions we want to ask (and levels for scoring if desired), structuring evaluation precisely according to the criteria that matter to us. (I personally have had good luck with DeepEval’s DAG framework.)
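Whether you use DeepEval or roll your own pipeline, the core loop is the same in spirit. This is a minimal sketch, not DeepEval’s API: `judge` stands in for whatever LLM client you use, and the prompt wording is an assumption:

```python
from typing import Callable

def score_with_rubric(response: str,
                      criteria: dict,
                      judge: Callable[[str], str]) -> dict:
    """Ask a judge LLM to score `response` 1-4 on each rubric criterion.

    `judge` is a placeholder: any function that takes a prompt string and
    returns the model's text reply (OpenAI client, DeepEval, local model...).
    """
    scores = {}
    for name, description in criteria.items():
        prompt = (
            "You are grading an LLM response against a rubric.\n"
            f"Criterion: {description}\n"
            f"Response:\n{response}\n"
            "Reply with a single integer from 1 (poor) to 4 (excellent)."
        )
        raw = judge(prompt)
        # Clamp defensively: judge models do not always follow instructions.
        scores[name] = max(1, min(4, int(raw.strip())))
    return scores
```

DeepEval’s DAG framework essentially lets you encode these criteria as nodes with explicit branching logic, rather than a flat loop like this.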
Now, even if we can employ an LLM for evaluation, it’s important to highlight things that the LLM can’t be expected to do or accurately assess, most centrally the truthfulness or accuracy of facts. As I’ve been known to say often, LLMs have no framework for telling fact from fiction; they are only capable of understanding language in the abstract. You can ask an LLM if something is true, but you can’t trust the answer. It might accidentally get it right, but it’s equally possible the LLM will confidently tell you the opposite of the truth. Truth is a concept that is not trained into LLMs. So, if it’s crucial for your project that answers be factually accurate, you need to incorporate other tooling to generate the facts, such as RAG using curated, verified documents — but never rely on an LLM alone for this.
However, if you’ve got a task like document summarization, or something else that’s suitable for an LLM, this should give you a good technique to start your evaluation with.
If you’re like me, you may now think “ok, we can have an LLM evaluate how another LLM performs on certain tasks. But how do we know the teaching assistant LLM is any good? Do we need to evaluate that?” And this is a very sensible question — yes, you do need to evaluate that. My recommendation for this is to create some passages of “ground truth” answers that you have written by hand, yourself, to the specifications of your initial prompt, and create a validation dataset that way.
Just like with any other validation dataset, this needs to be somewhat sizable, and representative of what the model might encounter in the wild, so you can achieve confidence with your testing. It’s important to include different passages with different kinds of errors and mistakes that you are testing for — so, going back to the example above, some passages that are organized and clear, and some that aren’t, so you can be sure your evaluation model can tell the difference.
Fortunately, because in the evaluation pipeline we can assign quantification to the performance, we can test this in a much more traditional way, by running the evaluation and comparing to an answer key. This does mean that you have to spend some significant amount of time creating the validation data, but it’s better than grading all those answers from your production model yourself!
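Concretely, validating the evaluator reduces to a familiar comparison against the answer key. A sketch, assuming 1-4 rubric scores on each validation passage:

```python
def evaluator_agreement(gold: list, predicted: list) -> dict:
    """Compare evaluator-LLM scores to hand-labeled gold scores.

    `gold` holds your hand-assigned rubric scores for each validation
    passage; `predicted` holds the evaluator LLM's scores for the same
    passages, in the same order.
    """
    n = len(gold)
    exact = sum(g == p for g, p in zip(gold, predicted)) / n
    mae = sum(abs(g - p) for g, p in zip(gold, predicted)) / n
    # Rubric levels are ordinal, so "off by one level" is a useful
    # softer measure than exact agreement.
    within_one = sum(abs(g - p) <= 1 for g, p in zip(gold, predicted)) / n
    return {"exact_match": exact, "mae": mae, "within_one_level": within_one}
```

If exact agreement is low but within-one agreement is high, your evaluator is roughly calibrated but fuzzy at the level boundaries — which may or may not be acceptable depending on your use case.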
Besides these kinds of LLM-based assessments, I am a big believer in building out additional tests that don’t rely on an LLM. For example, if I’m running prompts that ask an LLM to produce URLs to support its assertions, I know for a fact that LLMs hallucinate URLs all the time! Some percentage of the URLs they give me are certain to be fake. One simple strategy to measure and mitigate this is to use regular expressions to scrape URLs from the output, and actually send a request to each URL to see what the response is. This won’t be completely sufficient, because a URL might resolve but not contain the desired information, but at least you can differentiate the URLs that are hallucinated from the ones that are real.
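Here’s a sketch of that check using only the Python standard library. The regex is a pragmatic approximation of “looks like a URL”, and in production you would want timeouts, retries, and rate limiting:

```python
import re
import urllib.error
import urllib.request

# Pragmatic approximation: match http(s) runs, stopping at whitespace
# and common punctuation that tends to trail URLs in prose.
URL_PATTERN = re.compile(r"https?://[^\s)\"'<>\]]+")

def extract_urls(text: str) -> list:
    """Scrape anything that looks like a URL out of LLM output."""
    return URL_PATTERN.findall(text)

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

A successful response only tells you the page exists; confirming it actually supports the assertion is a separate, harder evaluation step.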
Ok, let’s take stock of where we are. We have our first LLM, which I’ll call “task LLM”, and our evaluator LLM, and we’ve created a rubric that the evaluator LLM will use to review the task LLM’s output.
We’ve also created a validation dataset that we can use to confirm that the evaluator LLM performs within acceptable bounds. But, we can actually also use validation data to assess the task LLM’s behavior.
One way of doing that is to get the output from the task LLM and ask the evaluator LLM to compare that output with a validation sample based on the same prompt. If your validation sample is meant to be high quality, ask if the task LLM results are of equal quality, or ask the evaluator LLM to describe the differences between the two (on the criteria you care about).
This can help you learn about flaws in the task LLM’s behavior, which could lead to ideas for prompt improvement, tightening instructions, or other ways to make things work better.
By now, you’ve got a pretty good idea what your LLM performance looks like. What if the task LLM sucks at the task? What if you’re getting terrible responses that don’t meet your criteria at all? Well, you have a few options.
There are lots of LLMs out there, so go try different ones if you’re concerned about the performance. They are not all the same, and some perform much better on certain tasks than others — the difference can be quite surprising. You might discover that different agent pipeline tools would be useful as well. (Langchain has tons of integrations!)
Are you sure you’re giving the model enough information to know what you want from it? Investigate what exactly is being marked wrong by your evaluation LLM, and see if there are common themes. Making your prompt more specific, or adding additional context, or even adding example results, can all help with this kind of issue.
Finally, if no matter what you do, the model/s just cannot do the task, then it may be time to reconsider what you’re attempting to do here. Is there some way to split the task into smaller pieces, and implement an agent framework? Meaning, can you run several separate prompts and get the results all together and process them that way?
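The decomposition idea can be as simple as running sub-prompts in sequence and feeding each result into the next. A sketch, where `llm` is a placeholder for your model client and the prompts are illustrative:

```python
from typing import Callable

def run_pipeline(document: str, llm: Callable[[str], str]) -> str:
    """Decompose one hard prompt into smaller sequential steps.

    `llm` is a stand-in for any prompt-in, text-out model client.
    Each step gets a narrower, easier task than the original prompt.
    """
    # Step 1: extract the facts we care about from the raw document.
    facts = llm(f"List the key claims made in this document:\n{document}")
    # Step 2: operate on the extracted facts, not the raw document.
    summary = llm(f"Summarize these claims in two sentences:\n{facts}")
    # Step 3: a final formatting/cleanup pass.
    return llm(f"Rewrite this summary in plain language:\n{summary}")
```

Each intermediate output is also a natural point to attach the evaluation machinery described above, so you can see which step is failing rather than just that the end result is bad.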
Also, don’t be afraid to consider that an LLM is simply the wrong tool to solve the problem you are facing. In my opinion, single LLMs are only useful for a relatively narrow set of problems relating to human language, although you can expand this usefulness somewhat by combining them with other applications in agents.
Once you’ve reached a point where you know how well the model can perform on a task, and that standard is sufficient for your project, you are not done! Don’t fool yourself into thinking you can just set it and forget it. Like with any machine learning model, continuous monitoring and evaluation is absolutely vital. Your evaluation LLM should be deployed alongside your task LLM in order to produce regular metrics about how well the task is being performed, in case something changes in your input data, and to give you visibility into what, if any, unusual and rare mistakes the LLM might make.
Now that we’ve reached the end, I want to emphasize the point I made earlier — consider whether the LLM is the solution to the problem you’re working on, and make sure you are using only what’s really going to be beneficial. It’s easy to get into a place where you have a hammer and every problem looks like a nail, especially at a moment like this when LLMs and “AI” are everywhere. However, if you actually take the evaluation problem seriously and test your use case, it will often clarify whether the LLM is going to be able to help or not. As I’ve described in other articles, using LLM technology has a massive environmental and social cost, so we all have to consider the tradeoffs that come with using this tool in our work. There are reasonable applications, but we also should remain realistic about the externalities. Good luck!
Read more of my work at www.stephaniekirmer.com
https://deepeval.com/docs/metrics-dag
https://python.langchain.com/docs/integrations/providers
Sensex Today | Stock Market LIVE Updates: Nifty remains below 24,700; Eternal shares dip over 5% in a week – CNBC TV18
MARKET THIS WEEK
Market Gives Up Gains Of Last Week, Sensex & Nifty Down 1% Each
Broader Mkts Underperform With Nifty Bank & Midcap Index falling 2% Each
Pharma & IT Gain The Most This Week With Sectoral Indices Up 2-3%
35 Nifty Stocks Give Negative Returns, Eternal, Adani Ports, Tata Steel Top Losers
Tech Mah, ONGC, Wipro, Grasim, HCLTech, DRL Are Top Nifty Gainers
Capital Market Plays Are Amongst Top Midcap Losers This Week
Angel One, USL, Paytm, BSE, CDSAL & CAM Are Amongst Top Midcap Losers
MARKET AT CLOSE
Market Closes In The Red But Off Lows Tracking Geopolitical Tensions
Nifty Holds On To Key Support Levels, Ends The Session Above 24,700
Financials Underperform With HDFC Bk, Kotak & IndusInd Dragging Nifty bk
Sensex Falls 573 Points To 81,119 & Nifty 170 Points To 24,719
Nifty Bank Slips 555 Points To 55,527 & Midcap Index 213 Points To 58,227
Defence Names See Buying As Geopolitical Tensions Rise, BEL, HAL Up 2% Each
Gold Financiers Gain Tracking +ve Move In Precious Metals, Manappuram Up 3%
Upstream Oil Cos Rise While Downstream Cos Slip As Crude Surges
Zee Ent Extends Gains Ahead Of June 16 Board Meet, Up 9% This Week
Jubilant Food & Jubilant Ingrevia Surge while Jubi Pharmova Slip Post Block Deals
IndiGo Slips 4% On Reports Of Promoter Stake Sale & Crude Gain
Capital Market Plays Continue To See Downmove, Angel One & CAMS Top Losers
IREDA, Canara, PNB Hsg, Granules, NMDC, Union Bank Are Top Midcap Losers
Market Breadth Favours Declines, Advance-Decline Ratio At 1:2
“It is some challenging times that the market is witnessing right now. We saw a gap down opening, and right now we are seeing a bounce. So unless and until we see a Nifty closing above 24,900 levels, I think the near term trend is damaged. If you see a close now below 24,400 or whenever that happens, that would trigger further sell off coming towards 23,500 to start with.
Likewise even Bank Nifty, 57,000-57,500 our technical target was met with. Right now it is showing some signs of weakness and in sync with Nifty if 55,000 level breaks, then we see a further down slide.”
Sammaan Capital has announced an offer to all debenture holders for the premature redemption of its non-convertible debentures (NCDs), citing robust domestic debt capital flows following recent policy measures by the Reserve Bank of India. The company said the bonds eligible for early redemption include those maturing up to September 2025, with an outstanding value of nearly ₹2,500 crore.
Rare earth magnet supply remains a key concern for VE Commercial Vehicles (VECV), according to Vinod Aggarwal, Managing Director and CEO. Read more here
#FromMoneycontrol | Hind Zinc in focus: Govt may delay co’s stake sale amid strong dividend inflows
Centre at a later stage may explore a staggered approach for stake sale
— CNBC-TV18 (@CNBCTV18Live) June 13, 2025
Nomura’s Robert Subbaraman expects the Federal Reserve to hold off on rate cuts until December, despite May’s softer inflation data. The CPI rose only 0.1% month-on-month, below expectations, bringing annual inflation to 2.4%. Read more here
Market Watch: Hemen Kapadia of DRChoksey Finserv
Buy Alkem Laboratories for a target price of ₹4,980 with a stop loss of ₹4,680
Buy Apollo Hospitals for a target price of ₹7,300 with a stop loss of ₹6,850
Kernex Microsystems joint venture (JV) gets Letter of Acceptance (LoA) for Kavach order worth ₹311 crore from Southern Railways, Chennai.
15.42 lakh shares (1.77% eq) worth ₹190 crore change hands at ₹1,227.50 per share.
Now at a record high, up over 2% for the 7th straight session.
#CNBCTV18Market | Brent off day’s high, back below $74/bbl
— CNBC-TV18 (@CNBCTV18Live) June 13, 2025
Union Finance Minister Nirmala Sitharaman is likely to chair a performance review meeting with the heads of public sector banks (PSBs) on June 27, sources said. This development follows the 29th Financial Stability and Development Council (FSDC) meeting held in Mumbai on June 10. At the FSDC meeting, Sitharaman pushed for a more citizen-friendly financial system. Read more here
Ajit Banerjee, President & CIO, Shriram Life Insurance
Auto Sector
“Our house view is a bit pessimistic at this point in time because there are headwinds in terms of input cost, as well as even for the EV sector, there is uncertainty about rare earth magnets, plus export opportunities are also getting limited.
So, there would be, near term, some amount of not so favourable terms for the sector. So that’s the reason one needs to be avoiding it for any fresh exposure or rather cut short the position also if need arises.”
PSU Sector
“The first six months were doing well last year, and then the next six months it fell out of favour, and then again after the geopolitical crisis in our country and some of the companies, their products have been demonstrated. It was almost like a practical application of their products, so that is the reason it has ignited a lot of interest now and there is merit in their capacities as well as their quality.
So, that is something, once again, that has brought it back to the limelight. And I guess that is going to be because any strong nation needs to fortify itself adequately, and India has shown that resilience, and that is what actually is helping the sector pretty well.”
IndiGo in focus, down nearly 5% after sources say promoter stake sale likely
Read more here
#OnCNBCTV18 | Do not see any change in fuel retail prices till crude remains below $80/bbl. Recent crude surge will have an impact on the #auto marketing margin
Crude in $60-70/bbl range is ideal for the Indian market, says MK Surana, Former CMD of #HPCL to CNBC-TV18
— CNBC-TV18 (@CNBCTV18Live) June 13, 2025
Shares of L&T Finance Ltd. on Friday, June 13, declined after brokerage firm UBS downgraded its rating but raised its price target by 18.6%. UBS has downgraded its rating on the stock to “neutral” from its earlier rating of “buy” but has raised its price target from ₹177 to ₹210 apiece, implying an upside potential of 16.3% from Thursday’s closing price. Read more here
India’s defence companies are bucking the trend in what has been an otherwise sombre day for India’s equity markets. The deteriorating geopolitical scenario between Israel and Iran has led to a surge in shares of Defence companies. Read more here
#OnCNBCTV18 | #QuantMF, #Blackrock, #3P participated in QIP issue. Annual spend on #technology is around ₹90 cr
Will grow AUM to ₹50,000 cr in 3 years. Won’t see a negative surprise on the asset quality front
Rajesh Sharma, #CapriGlobal on CNBC-TV18
— CNBC-TV18 (@CNBCTV18Live) June 13, 2025
Market Watch: Manas Jaiswal, Technical Analyst at manasjaiswal.com
Buy Manappuram Finance for a target price of ₹285 with a stop loss of ₹269
Buy Oracle Financial Services Software for a target price of ₹9,900 with a stop loss of ₹9,450
Pranav Gundlapalle, Sr Analyst-India Financials, Bernstein On CNBC-TV18
Abizer Diwanji, Founder, NeoStrat Advisors LLP On CNBC-TV18
#CNBCTV18Market | IT stocks recover, #Coforge now up more than 1%
— CNBC-TV18 (@CNBCTV18Live) June 13, 2025
Shares of Shipping Corporation of India Ltd. (SCI) and GE Shipping Ltd. gained 5% and 3%, respectively, on Friday, June 13, bucking the trend in an otherwise weak market. The stocks are among the few gainers on the Nifty 500 index today. They surged after Israel launched precise and pre-emptive attacks on Iran’s nuclear program in the early hours of Friday, resulting in the deaths of several commanders and scientists, as confirmed by Iran’s Supreme Leader Ayatollah Khamenei. Read more here
The Indian rupee breached the 86 mark against the US dollar in early trade on Friday (June 13) after Israel launched strikes on Iran, escalating fears of a wider Middle East conflict. The rupee opened at 86.14 per dollar, down 55 paise from Thursday’s (June 12) close of 85.60. This marks the sharpest single-day decline in recent weeks. Read more here
©TV18 Broadcast Limited. All rights reserved.
Gov. Phil Scott signs into law 2 bills to address Vermont’s high health care costs – VTDigger
Gov. Phil Scott this week signed two significant pieces of health care legislation into law, both of which seek to rein in health care costs while bolstering state oversight of hospital practices.
One bill, H.266, signed on Wednesday, limits the amount that Vermont health care providers can charge for outpatient prescription drugs — medications administered by injection or IV that are often used to treat cancers and autoimmune diseases.
Another, S.126, signed Thursday, aims at a more comprehensive, long-term transformation of health care regulation in the state. Among other items, the bill requires state health care officials to develop a “statewide health care delivery plan” and present it to the Legislature by 2028. The legislation also directs the Green Mountain Care Board to implement reference-based pricing, a system that tethers the prices that health care providers charge to the equivalent rates that Medicare allows.
Taken together, the two pieces of legislation represent a major effort by Vermont lawmakers and officials to curb health care costs while bolstering oversight of hospitals.
The passage of the bills comes at a time when many health care entities in the state are struggling to stay afloat while patients and employers are facing skyrocketing insurance premiums.
“We have no choice,” Mike Fisher, Vermont’s chief health care advocate, said Thursday. “There’s a substantial risk that we’re going to lose key providers in communities around the state if we don’t intervene.”
Among the main contributors to ballooning health care costs in Vermont are the extreme markups that hospitals charge for certain drugs, lawmakers and officials have said.
Currently, the average price of outpatient pharmaceuticals in the state is more than five times the manufacturer’s average sales price, by far the highest average markup in the nation, according to data compiled by the research and consulting firm RAND.
H.266 caps the cost of those drugs at 120% of their manufacturers’ average sale price beginning in January 2026, a move that health officials say will go a long way toward immediately lowering Vermont’s rising insurance premiums and health care expenditures.
Under the new cap, outpatient drug prices at Vermont hospitals would be the lowest in the nation, according to preliminary estimates. Hospitals in the state designated federally as “critical access hospitals” are exempt from the cap if they are not affiliated with a larger hospital network. That group of six hospitals not covered by the cap includes Copley Hospital in Morrisville, Gifford Medical Center in Randolph and North Country Hospital in Newport.
“It is the most consequential and immediate effort I have heard of to reduce health care costs in the state,” Owen Foster, chair of the Green Mountain Care Board, said Thursday.
Blue Cross Blue Shield of Vermont, the state’s largest health insurance provider, has already said the measure would reduce the insurer’s projected rate of premium increases for next year by an estimated four percentage points for plans offered on Vermont Health Connect, the state’s Affordable Care Act marketplace, and by three percentage points for public school employees.
But advocates for the state’s hospitals argue that the proposal would take away millions of dollars of revenue, requiring some health care providers to drastically tighten their belts and potentially cut staff and services.
“As we do this work, we’ll first ensure we look to administrative and other savings to limit and avoid, to the fullest extent possible, impacts on direct patient care and services,” Devon Green, a lobbyist for the Vermont Association of Hospitals and Health Care Systems, said in a written statement. “We know this will be challenging, but if we work with our state partners, the GMCB and together as hospitals, we are confident we can make meaningful progress.”
Lawmakers and officials are looking to rein in hospital prices in the longer term with S.126, which requires the Green Mountain Care Board to establish a reference-based pricing system for the state’s hospitals by 2027.
Under the proposed model, the Green Mountain Care Board will limit the amount that hospitals charge private insurance companies for patient procedures by pegging those prices to the equivalent rates that Medicare sets for hospitals.
State officials have long trumpeted reference-based pricing as a means of clamping down on rising health care costs.
A 2024 report produced by the care board found that implementing the cost saving system just for state and school employee insurance plans could save the state tens of millions of dollars annually. Doug Hoffer, the state auditor, similarly touted reference-based pricing as a cost-saving measure for the state’s health care system in a 2021 report.
In practice, the measure represents a seismic shift in the way that hospitals price patient care, and it remains unclear exactly how the pricing system would work.
“There’s a significant amount of work to do to come up with a payment methodology for reference-based pricing,” Foster said. “All of these things are going to take a fair amount of time and effort to get it right, and so that’s what we’re initiating.”
S.126 also tasks state officials with transitioning the state’s health care system to a “global budget” payment model by 2030, meaning participating hospitals would receive fixed amounts of money from participating insurers to operate within a given year rather than receiving separate payments for individual procedures.
Vermont already took steps in the direction of establishing a global budget model when, earlier this year, it signed onto a pilot program run by the Centers for Medicare and Medicaid Services called the AHEAD model. The pilot program, if it moves forward, would allow the state to incorporate federally funded insurers into the payment system.
Additionally, the bill would give state regulators more general oversight of health care providers, allowing the Green Mountain Care Board to collect more data and financial information from hospitals for the sake of standardizing pricing and budgets.
Last week, Scott also signed into law H.482, legislation that gives the Green Mountain Care Board emergency authority to reduce hospital prices when insurers face a risk of insolvency.
“It just provides a layer of protection for the entire system,” Foster said. “I see that as a critical step in the situation that we’re in, with financial concerns at our primary insurer.”
Vermont’s affordability crisis affects us all — but not in the same way. That’s why VTDigger is launching a new reporting beat focused on wealth, poverty and economic opportunity across the state.
Report for America will partially fund this new role, but they require the remainder come from our community. A generous Vermont donor has stepped up to MATCH ALL GIFTS received by Saturday, June 14.
Will you help us launch this critical reporting position? No gift is too big or too small — and today, it will be doubled.
Sincerely,
Neal Goswami, Managing Editor, VTDigger
Habib Sabet is VTDigger's business and general assignment reporter.
Vipin Khuttel Leading the Way in Digital Marketing Training & Thought Leadership – openPR.com
How Vipin Khuttel is Shaping India’s Digital Future Through Education & Innovation
Boko Haram kidnaps Nigerian Priest near Cameroon border – africanews.com
with AP
A Nigerian Catholic priest who recently served in the United States has been abducted by extremists along with other travellers in northeast Nigeria’s Borno state, the church said.
The Rev. Alphonsus Afina was kidnapped on June 1 near the northeastern town of Gwoza, close to the border with Cameroon, by the Islamic extremist group Boko Haram, Bishop John Bogna Bakeni of Maiduguri told The Associated Press on Sunday.
Bakeni said that he spoke with the priest over the phone a day after the abduction. Afina, though exhausted from trekking, was “sounding OK” and “in good spirits” during the brief conversation, according to the bishop.
The priest was traveling from the city of Mubi, where he is based, to Maiduguri, the capital of Borno, for a workshop when his convoy was ambushed by armed men while waiting for clearance at a military checkpoint, he said.
A rocket-propelled grenade hit one of the vehicles, killing one person and wounding others, according to the bishop.
Bakeni said it was difficult to determine if the priest was specifically targeted, given the number of travelers caught in the ambush. Other travelers were also abducted, he said, although it was unclear how many.
Nigerian authorities haven’t publicly commented on the abductions and didn’t respond to requests for comment.
Rev. Robert Fath, the vicar general of the diocese of Fairbanks, Alaska, told the Anchorage Daily News on Thursday that he had received a phone call from Boko Haram confirming they had Afina.
Afina served in Alaska from 2017 to 2024 before returning to Nigeria, where he works with the Justice, Development and Peace Commission, a Catholic social justice group.
Nigerian authorities are struggling to stem rising violence in the north and central regions, where armed groups, including Boko Haram, target rural communities, killing thousands and abducting people for ransom.
The attacks sometimes target religious figures such as clerics. In March, a priest in central Nigeria was kidnapped and killed by unidentified armed men.
Boko Haram, Nigeria’s homegrown jihadis, took up arms in 2009 to fight Western education and impose their radical version of Islamic law. The conflict has spilled into Nigeria’s northern neighbors and resulted in the death of around 35,000 civilians and the displacement of more than 2 million others, according to the United Nations.