ReadCtrl: Personalizing text generation with readability-controlled instruction learning (2024)

Hieu Tran1*, Zonghai Yao1*, Lingxi Li1, Hong Yu1,2,3,4
1 Manning College of Information and Computer Sciences, University of Massachusetts Amherst, MA, USA
2 Department of Medicine, University of Massachusetts Medical School, Worcester, MA, USA
3 Miner School of Computer and Information Sciences, University of Massachusetts Lowell, MA, USA
4 Center for Healthcare Organization and Implementation Research, VA Bedford Health Care, MA, USA
{hieutran, zonghaiyao}@umass.edu
* indicates equal contribution

Abstract

Content generation conditioned on users' readability is an important application of personalization. In the era of large language models (LLMs), readability-controlled text generation based on LLMs has become increasingly important. This paper introduces a novel methodology called "Readability-Controlled Instruction Learning (ReadCtrl)," which instruction-tunes LLMs to tailor outputs to users' readability levels. Unlike traditional methods, which primarily focus on categorical readability adjustments (typically classified as high, medium, and low, or expert and layperson levels) with limited success, ReadCtrl introduces a dynamic framework that enables LLMs to generate content at various (near-continuous) complexity levels, thereby enhancing their versatility across different applications. Our results show that the ReadCtrl-Mistral-7B model significantly outperformed strong baseline models such as GPT-4 and Claude-3, with a win rate of 52.1% versus 35.7% against GPT-4 in human evaluations. Furthermore, ReadCtrl shows significant improvements in automatic evaluations, as evidenced by better readability metrics (e.g., FOG, FKGL) and generation quality metrics (e.g., BLEU, SARI, SummaC-Factuality, UniEval-Consistency and Coherence). These results underscore ReadCtrl's effectiveness and robustness in producing high-quality, contextually appropriate outputs that closely align with targeted readability levels, marking a significant advancement in personalized content generation using LLMs. Our code and data will be released at https://github.com/bio-nlp/ReadCtrl.

1 Introduction

[Figure 1: Output readability as a function of the requested reading level (1-12) for the evaluated LLMs; Mistral-ReadCtrl tracks the ideal adherence curve much more closely than the baselines.]

Existing personalization methods mainly focus on the semantics of the content that individuals need, such as retrieving information based on individuals' search queries (Chen et al., 2023; Kirk et al., 2024; Shanahan et al., 2023) and summarization based on content representation (Richardson et al., 2023). However, one important aspect of personalization that has not been widely explored is readability-controlled content generation (Vajjala, 2021). This involves tailoring content to match individuals' readability levels, which can vary widely due to differences in education and related experience and training (Ribeiro et al., 2023). The emergence of large language models (LLMs) has further advanced this field, ushering in a transformative era of automatic content generation (Pu and Demberg, 2023). It is crucial for content generated by these models to be accurate, relevant, and consistent with the cognitive abilities of the target audience. The emphasis on customized content creation underscores the critical role of customization and personalization in digital interactions, especially in environments where explicit instructions (typically from LLMs' users) must be strictly followed (Zhou et al., 2023; Sun et al., 2023; Qin et al., 2024). At the heart of this area are readability-control instructions, designed to dynamically adapt the output vocabulary distribution to the specific context of each interaction. This can be achieved by analyzing chat history, interpreting user profiles, or responding to direct interaction requests, significantly enhancing LLMs' versatility (Huang et al., 2023).

Previous efforts in controllable text generation have primarily centered on binary readability adjustments, typically framed as simplification, complication, or sibling style-transfer tasks (Goldsack et al., 2022; Guo et al., 2021; Luo et al., 2022). Despite their objectives, these approaches often fail to fully address diverse personalization needs because of the limited variety in training data and a narrow emphasis on readability. In traditional supervised fine-tuning scenarios, designing multiple readability-level ground truths for training data to facilitate readability control is not scalable. As a result, models may not have sufficient exposure to varied text complexities, limiting their ability to adjust content accurately to user-specific readability needs. In response, the field of artificial intelligence is shifting towards more dynamic systems that can adapt outputs to meet users' unique preferences and requirements (Kirk et al., 2024). This shift is heralding a new era of personalized content creation that extends beyond standard domains like information retrieval to specialized areas, enhancing user engagement and satisfaction through tailored content.

This paper addresses these challenges by introducing a novel methodology termed "readability-controlled instruction learning (ReadCtrl)." Our findings demonstrate that ReadCtrl can empower LLMs to accurately transform input text into content with controlled readability. As illustrated in Figure 1, our investigation across a range of state-of-the-art LLMs shows varying degrees of compliance with readability-controlled instructions. Mainstream models like GPT (Achiam et al., 2023) and Claude (Anthropic, 2024), despite showing an upward trend, fall significantly short of the ideal "perfect" adherence curve: they show a tendency towards compliance but lack precise output control. In contrast, models that previously struggled with readability control, such as Mistral-7B (Jiang et al., 2023), which appears as an almost horizontal line in the figure, improve substantially once ReadCtrl is applied (Mistral-ReadCtrl). These models now nearly match the ideal performance curve, showcasing their improved ability to customize outputs to specific readability demands. Specifically, ReadCtrl incorporates explicit instruction tuning based on readability and is rigorously tested through tasks designed to evaluate the model's ability to adjust output complexity. These tasks include text simplification, which reduces the output's readability relative to the input; paraphrase generation, which maintains the input's readability; and semantic entailment generation, which adjusts the output's readability, potentially increasing or decreasing it relative to the input. By employing a single clear instruction, "Given an input text, please output an entailment with a readability score around {target readability score}", models like Mistral-ReadCtrl demonstrate the efficacy of ReadCtrl in generating content that is not only semantically coherent but also finely adjusted to meet diverse comprehension needs and preferences.

In our initial experiments, we evaluated model performance in a "seen setting," where models were tested on datasets on which they were trained: ASSET (Alva-Manchego et al., 2020) for text simplification, PAWS (Zhang et al., 2019) for paraphrase generation, and SNLI (Bowman et al., 2015) for semantic entailment. This setting established a baseline for instruction tuning, confirming that the models could effectively adhere to readability instructions in familiar contexts. Subsequent experiments involved an "unseen setting," testing the models against datasets they had not encountered during training: WikiSmall (Zhu et al., 2010) for text simplification, MRPC (Dolan and Brockett, 2005) for paraphrase generation, and MultiNLI (Williams et al., 2017) for semantic entailment. This phase was critical for assessing the models' adaptability and accuracy in novel contexts, reflecting their generalizability and real-world applicability. We used several readability assessment metrics, including the Gunning Fog Index (Gunning, 1952) and the Flesch-Kincaid Grade Level (Kincaid et al., 1975), to quantitatively evaluate the complexity of the generated texts. Additionally, we employed a range of automatic metrics for generation quality, such as BLEU (Papineni et al., 2002), SARI (Xu et al., 2016), Factuality (Laban et al., 2022), and Consistency and Coherence (Zhong et al., 2022), aiming to balance readability, information retention, factuality, consistency, and coherence during evaluation.

These evaluations confirmed the effectiveness of our ReadCtrl methodology across a diverse range of tasks and datasets. In particular, Mistral-ReadCtrl excelled in both seen and unseen settings, showing robust performance. For instance, on the unseen MRPC dataset, Mistral-ReadCtrl achieved the lowest readability gap (1.66), the highest factuality (0.8184), and excellent BLEU (0.3798) and SARI (44.4327) scores, significantly outperforming competitors like GPT-4 and Claude-3. On the WikiSmall dataset, it recorded a readability gap of just 2.09, the highest coherence score (0.9763), and a strong SARI score of 42.1033. Furthermore, detailed human and LLM-as-a-judge (Lan et al., 2024) evaluations were conducted to compare Mistral-ReadCtrl with GPT-4 across different tasks and readability requirements. Mistral-ReadCtrl demonstrated superior performance, achieving a win rate of 52.1% in human evaluations and 58.3% in AI assessments, compared to GPT-4's 35.7% and 38.4%, respectively. Notably strong results were observed on WikiSmall (62.5% in human evaluation, 67.7% in AI evaluation) and SNLI (66.7% in human evaluation).

[Figure 2]

2 Methodology

2.1 Task Overview

Our methodology is designed to evaluate the effectiveness of instruction tuning conditional on readability across a suite of tasks, specifically focusing on text simplification, paraphrase generation, and semantic entailment generation. These tasks are strategically chosen to test the model's capability in adjusting the complexity of its output to match specified readability levels. They serve a broad spectrum of applications, from enhancing educational material accessibility to refining technical documentation for diverse audiences.

  • Text Simplification: Here, the aim is to reduce the readability level of the given input text, making it more accessible to a wider audience or to readers with varying comprehension skills. This task challenges the model to simplify complex text while preserving its essential content and meaning, demonstrating the ability to decrease textual complexity on demand.

  • Paraphrase Generation: In this task, the model is tasked with rewording the given text to produce a paraphrase that maintains the original’s readability level. This requires a nuanced understanding of language to ensure the output remains true to the input’s complexity and style, facilitating content reformulation without altering its accessibility.

  • Semantic Entailment Generation: This involves creating text that semantically follows from the given input, with the flexibility to increase or decrease the readability level. The model must grasp the underlying meaning of the input text and generate output that logically entails the input, demonstrating versatility in producing content with adjustable complexity levels.

We employ the instruction tuning approach conditional on readability for all these tasks. This method provides explicit instructions to the model to control the output text’s readability score, ensuring that the generated content aligns with the intended complexity level for the target audience. This approach underlines our belief that these tasks can all contribute to readability control generation, where, depending on the task—be it text simplification, paraphrase generation, or semantic entailment generation—the model is calibrated to generate output with the desired readability level. In text simplification, the goal is to lower the readability of the output relative to the input, while in paraphrase generation, the output’s readability should mirror the input’s. For the semantic entailment generation task, the output’s readability may vary, being either higher or lower than the input’s, thereby offering a versatile tool for adjusting text complexity across a wide range of contexts.
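To make this calibration concrete, the sketch below shows one plausible way a ReadCtrl training example could be assembled, assuming the target readability score is simply taken from the reference output (so it is lower than the input for simplification, similar for paraphrasing, and free to vary for entailment generation). The helper names and data layout are our own illustration, not the released ReadCtrl implementation.

```python
# A minimal sketch (not the authors' released code) of how one instruction
# template can serve all three tasks: the target readability comes from the
# reference output, which by construction is lower than the input for
# simplification, close to the input for paraphrasing, and unconstrained for
# entailment generation.

INSTRUCTION = ("Given an input text, please output an entailment with a "
               "readability score around {target}.")

def build_training_example(source: str, reference: str, reference_rgl: float) -> dict:
    """Format one prompt/response pair for readability-controlled instruction tuning.

    `reference_rgl` is the average Reading Grade Level of the reference output
    (see Section 2.3); using it as the target teaches the model to hit the
    level named in the instruction.
    """
    prompt = INSTRUCTION.format(target=round(reference_rgl)) + "\nInput: " + source
    return {"prompt": prompt, "response": reference}

# Example: a simplification pair whose reference reads at roughly grade 4
# yields an instruction asking for readability around 4, below the input.
example = build_training_example(
    source="The municipality promulgated an ordinance restricting vehicular access.",
    reference="The town made a rule to limit cars.",
    reference_rgl=4.2,
)
```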

2.2 Instruction Design for Readability Control

To achieve the desired readability level across various tasks, we employ straightforward and singular instruction. This approach emphasizes the model’s ability to tailor its output to meet specific readability goals, demonstrating its versatility and effectiveness in readability control. The instruction is as follows:

"Given an input text, please output an entailment with a readability score around target readability score."

This concise instruction mandates the model to generate content that not only semantically follows from the given input but also aligns with a specified readability level, showcasing the model’s capacity to produce targeted outputs that cater to diverse comprehension needs and preferences.
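As an illustration of how this single instruction would be used at inference time, the following sketch prompts a readability-tuned model at several target levels. The checkpoint name and generation settings are hypothetical placeholders, not values reported in the paper.

```python
# Illustrative inference-time use of the ReadCtrl instruction (a sketch; the
# checkpoint name "bio-nlp/mistral-readctrl" is hypothetical and the decoding
# settings are our own choices).
from transformers import pipeline

generator = pipeline("text-generation", model="bio-nlp/mistral-readctrl")

def readctrl_generate(text: str, target_level: int) -> str:
    prompt = (f"Given an input text, please output an entailment with a "
              f"readability score around {target_level}.\nInput: {text}\nOutput:")
    out = generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"]
    return out[len(prompt):].strip()  # drop the echoed prompt

# The same source can be re-rendered at different reading levels.
source = "Photosynthesis converts light energy into chemical energy in plants."
for level in (2, 5, 8, 11):
    print(level, readctrl_generate(source, level))
```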

2.3 Implementation and Readability Scoring

The readability of the generated text is quantitatively evaluated using a suite of established readability metrics. We calculate the Flesch-Kincaid Grade Level (FKGL), the Gunning Fog Index (GFI), the Automated Readability Index (ARI), and the Coleman-Liau Index (CLI); more details can be found in Appendix A.

These metrics are selected for their diverse approaches to assessing text complexity, offering a comprehensive understanding of the text’s readability. Subsequently, an average Reading Grade Level (RGL) is derived from these scores to represent the text’s overall readability. The integration of these readability assessments into our methodology allows a nuanced approach to generating text that meets the specified readability criteria. By adjusting the instruction based on the target RGL, we can fine-tune the complexity of the output, making our approach adaptable to a wide range of applications, from educational content to technical documentation. This process underscores the importance of readability in tailoring content to specific audience needs, a critical factor in communication effectiveness across various domains.
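For concreteness, a minimal sketch of the RGL computation is shown below, assuming an unweighted average of the four scores and using the open-source textstat package as the scorer; the paper's exact implementation may differ.

```python
# A minimal sketch of the Reading Grade Level (RGL): the unweighted mean of
# four standard readability formulas, computed here with the textstat package.
import textstat

def reading_grade_level(text: str) -> float:
    scores = [
        textstat.flesch_kincaid_grade(text),          # FKGL
        textstat.gunning_fog(text),                   # Gunning Fog Index
        textstat.automated_readability_index(text),   # ARI
        textstat.coleman_liau_index(text),            # Coleman-Liau Index
    ]
    return sum(scores) / len(scores)

print(reading_grade_level("The cat sat on the mat. It was warm."))
```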

3 Experiments

3.1 Dataset

Our experimental framework is designed to assess the model’s performance across various tasks, specifically focusing on text simplification, paraphrase generation, and semantic entailment generation. To facilitate a comprehensive evaluation, we utilize six distinct datasets, two for each task, which enables us to explore the model’s capabilities in both seen and unseen settings. The datasets employed in our experiments are outlined as follows:

  • Text Simplification: For this task, we use the ASSET Alva-Manchego etal. (2020) and WikiSmall Zhu etal. (2010) datasets. ASSET is a diverse corpus for automatic sentence simplification, providing high-quality simplifications with multiple references per source sentence, making it ideal for instruction tuning and evaluation in seen settings. WikiSmall serves as an additional dataset for evaluating performance in an unseen setting, offering a different collection of simplified sentences derived from Wikipedia articles.

  • Paraphrase Generation: We utilize the PAWS Zhang etal. (2019) (Paraphrase Adversaries from Word Scrambling) and MRPC (Microsoft Research Paraphrase Corpus) Dolan and Brockett (2005) datasets. PAWS contains pairs of sentences paraphrasing each other, including those constructed through controlled word scrambling, making it suitable for training and the seen setting evaluations. MRPC offers a collection of sentence pairs labeled as paraphrases or not, sourced from online news sources, to test the model’s paraphrasing ability in unseen settings.

  • Semantic Entailment Generation: For this task, the SNLI (Stanford Natural Language Inference) Bowman etal. (2015) and MultiNLI (Multi-Genre Natural Language Inference) Williams etal. (2017) datasets are employed. SNLI is a large collection of sentence pairs annotated with textual entailment information, used for instruction tuning and seen setting evaluation. MultiNLI extends this to a broader range of genres and contexts, providing a robust challenge for the model in unseen settings.

In our experimental setup, instruction tuning is performed on the training sets of ASSET, PAWS, and SNLI to align the model’s output with specific readability goals. The effectiveness of this approach is then evaluated in two distinct settings: a seen setting, using the test sets of ASSET, PAWS, and SNLI, and an unseen setting, using the test sets of WikiSmall, MRPC, and MultiNLI. This methodology allows us to not only measure the model’s immediate response to the instruction tuning but also its generalizability and adaptability to different textual contexts and tasks.
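The seen and unseen splits can be assembled with the Hugging Face datasets library roughly as follows; the hub identifiers and split names are our assumptions about commonly available copies of these corpora, not a description of the authors' exact data pipeline.

```python
# A sketch of assembling the seen/unseen evaluation pools (dataset identifiers
# are assumptions; the paper does not state which copies were used).
from datasets import load_dataset

seen = {
    "asset": load_dataset("asset", "simplification", split="validation"),
    "paws": load_dataset("paws", "labeled_final", split="train"),
    "snli": load_dataset("snli", split="train"),
}

unseen = {
    "mrpc": load_dataset("glue", "mrpc", split="test"),
    "multi_nli": load_dataset("multi_nli", split="validation_matched"),
    # WikiSmall is typically distributed as raw files and loaded locally
    # rather than from the hub.
}

print({name: len(ds) for name, ds in seen.items()})
```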

3.2 Evaluation Metrics

To comprehensively evaluate the model’s performance across the different tasks, we employ a multifaceted set of metrics that assess various aspects of the generated texts. These metrics enable us to gauge the model’s effectiveness in adjusting readability, maintaining factual accuracy, and ensuring textual coherence and consistency. The following metrics are used:

  • Average Readability Score: This metric calculates the average readability level of the generated texts, providing insight into the overall accessibility of the content produced by the model.

  • Readability Gap (Delta): The readability gap is measured as the difference between the requested readability level and the actual readability level of the generated text. This metric assesses the model's precision in hitting target readability levels (a short computation sketch follows this metric list).

  • Factuality: Factuality is evaluated based on the methodology from the SummaC Laban etal. (2022) work, which offers a means to assess the factual alignment of the generated text with the source content or input.

  • Consistency and Coherence: These aspects are measured using criteria from the UniEval Zhong etal. (2022) framework, which provides standardized metrics for evaluating the logical consistency and coherence of the text, ensuring that the generated content is not only readable but also logically structured and coherent.

  • SARI: The SARI (System output Against References and the Input sentence) Xu etal. (2016) metric is utilized to assess the quality of text simplification. It measures the model’s ability to produce simplified text that is both accurate and helpful, comparing the generated output against both the original text and reference simplifications.

  • BLEU: The BLEU (Bilingual Evaluation Understudy) Papineni etal. (2002) metric is applied to evaluate paraphrase generation and semantic entailment tasks. It quantifies the linguistic similarity between the generated texts and reference texts, indicating the model’s capability to produce coherent and contextually appropriate content.

These metrics collectively offer a robust framework for assessing the nuanced performance of the model across various dimensions of text generation, readability adjustment, and content quality.
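The sketch below illustrates the Readability Gap computation referenced in the metric list above; the data layout and the placeholder scorer are illustrative assumptions.

```python
# A minimal sketch of the Readability Gap (Delta): the average absolute
# difference between the requested level and the RGL of each output.
# `reading_grade_level` is assumed to be the scorer from Section 2.3.

def readability_gap(examples: list[dict], reading_grade_level) -> float:
    """examples: [{"requested_level": int, "output": str}, ...]"""
    gaps = [abs(ex["requested_level"] - reading_grade_level(ex["output"]))
            for ex in examples]
    return sum(gaps) / len(gaps)

# Example with a trivial stand-in scorer (word count / 2), for illustration only.
demo = [{"requested_level": 5, "output": "A short simple sentence."},
        {"requested_level": 9, "output": "A considerably more elaborate formulation."}]
print(readability_gap(demo, lambda text: len(text.split()) / 2))
```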

Table 1: Performance comparison of Claude-3, GPT-3.5, GPT-4, and Mistral-ReadCtrl. Each cell reports seen / unseen scores for the dataset pair named above the block.

ASSET (seen) / WikiSmall (unseen) - Text Simplification
Models | Readability Gap ↓ | Factuality ↑ | Consistency ↑ | Coherence ↑ | BLEU ↑ | SARI ↑
Claude-3 | 3.6323 / 4.53 | 0.5221 / 0.4612 | 0.9301 / 0.9391 | 0.934 / 0.9396 | 0.1874 / 0.1606 | 40.6964 / 32.9996
GPT-3.5 | 2.8635 / 3.12 | 0.7231 / 0.6721 | 0.9641 / 0.9401 | 0.9648 / 0.9231 | 0.2739 / 0.194 | 41.0061 / 33.9842
GPT-4 | 2.7465 / 2.69 | 0.6547 / 0.5892 | 0.9688 / 0.9556 | 0.9687 / 0.949 | 0.2061 / 0.1666 | 39.7319 / 31.4657
Mistral-ReadCtrl | 1.8384 / 2.09 | 0.7687 / 0.7168 | 0.9423 / 0.9477 | 0.9653 / 0.9763 | 0.4317 / 0.4321 | 49.3521 / 42.1033

SNLI (seen) / MultiNLI (unseen) - Semantic Entailment Generation
Models | Readability Gap ↓ | Factuality ↑ | Consistency ↑ | Coherence ↑ | BLEU ↑ | SARI ↑
Claude-3 | 4.6433 / 5.64 | 0.5102 / 0.3904 | 0.919 / 0.8292 | 0.9331 / 0.8346 | 0.0446 / 0.0303 | 48.3281 / 44.4344
GPT-3.5 | 2.8333 / 6.7 | 0.5176 / 0.3967 | 0.9049 / 0.8829 | 0.8982 / 0.896 | 0.0875 / 0.0378 | 51.0201 / 44.0607
GPT-4 | 2.4733 / 3.36 | 0.5632 / 0.5167 | 0.9488 / 0.8961 | 0.9382 / 0.8879 | 0.105 / 0.0562 | 52.1153 / 46.4204
Mistral-ReadCtrl | 1.8733 / 2.21 | 0.7406 / 0.6542 | 0.9491 / 0.8804 | 0.9437 / 0.9122 | 0.183 / 0.1137 | 51.6644 / 43.8289

PAWS (seen) / MRPC (unseen) - Paraphrase Generation
Models | Readability Gap ↓ | Factuality ↑ | Consistency ↑ | Coherence ↑ | BLEU ↑ | SARI ↑
Claude-3 | 2.4333 / 2.61 | 0.5141 / 0.4736 | 0.921 / 0.9154 | 0.9183 / 0.9012 | 0.2393 / 0.1679 | 38.3459 / 36.7783
GPT-3.5 | 1.5433 / 2.64 | 0.7443 / 0.5868 | 0.9761 / 0.9683 | 0.9746 / 0.9679 | 0.3873 / 0.2059 | 37.9808 / 37.3417
GPT-4 | 1.4467 / 2.19 | 0.7085 / 0.5203 | 0.979 / 0.9635 | 0.978 / 0.9639 | 0.3122 / 0.153 | 34.3525 / 34.8477
Mistral-ReadCtrl | 0.6367 / 1.66 | 0.7871 / 0.8184 | 0.9677 / 0.9669 | 0.9735 / 0.9769 | 0.6649 / 0.3798 | 60.5332 / 44.4327

3.3 Evaluated Models

In our study, we evaluate a diverse set of models to understand their efficacy on text simplification, paraphrase generation, and semantic entailment generation, particularly focusing on adjusting text complexity according to specified readability levels. The models include:

  • GPT-3.5: As a precursor to GPT-4, GPT-3.5 has demonstrated substantial capabilities in generating human-like text across various tasks. It serves as a baseline to understand the incremental improvements brought about by its successors and other models.

  • GPT-4: The latest iteration from OpenAI’s GPT series at the time of our study, GPT-4, represents a significant leap in language model performance, offering improved comprehension and generation capabilities over its predecessors.

  • Claude-3: As a model known for its understanding and generation abilities, Claude-3 has been included as a baseline for its efficiency in handling various NLP tasks and its purported adaptability to instruction-based prompts, making it a relevant comparison for our instruction-tuned model.

  • Mistral 7B ReadCtrl: Our proposed model has been instruction-tuned to adjust the readability level of generated texts based on explicit instructions. Mistral 7B is designed to excel in the specific tasks of text simplification, paraphrase generation, and semantic entailment generation, leveraging instruction tuning to achieve precise control over the readability of its outputs.

Each of these models brings unique strengths and capabilities to the table, allowing us to conduct a comprehensive comparison that not only highlights Mistral 7B’s advancements in controlling readability but also situates these achievements within the broader context of current NLP technologies. By evaluating Mistral 7B against these established models, we aim to demonstrate its efficacy and potential applications in enhancing readability control in automatic text generation.

3.4 Results

3.4.1 Performance on seen tasks

Table 1 presents a performance comparison of Claude-3, GPT-3.5, GPT-4, and our model, Mistral-ReadCtrl, on seen tasks involving three datasets where instruction tuning was implemented: ASSET, SNLI, and PAWS. Regarding the Readability Gap, Mistral-ReadCtrl demonstrates superior precision in adhering to target readability levels, achieving the lowest scores across all datasets, indicating effective control over text readability. Factuality scores, which assess the accuracy of content compared to the original, show that Mistral-ReadCtrl maintains higher factual consistency than its counterparts. When evaluating Consistency and Coherence, which measure the logical flow and structural soundness of texts, Mistral-ReadCtrl performs robustly, though it is slightly outperformed by GPT-4 in the PAWS dataset. For BLEU and SARI metrics, critical for evaluating the linguistic and contextual appropriateness of text simplification and paraphrase generation, Mistral-ReadCtrl again posts the highest scores, showcasing its efficacy in producing high-quality, contextually appropriate responses.

3.4.2 Performance on unseen tasks

Table 1 illustrates the performance of four models — Claude-3, GPT-3.5, GPT-4, and Mistral-ReadCtrl — on unseen tasks, using the datasets WikiSmall for text simplification, MultiNLI for semantic entailment generation, and MRPC for paraphrase generation. These results are crucial for assessing each model’s ability to generalize beyond the data types encountered during training.

In the WikiSmall dataset, Mistral-ReadCtrl outperforms other models with the lowest readability gap of 2.09, suggesting superior control aligning with the target readability levels. It also achieves the highest factuality and coherence scores and significantly outstrips the competition in BLEU and SARI scores, indicating its effectiveness in maintaining content quality in text simplification tasks.

Mistral-ReadCtrl again shows notable performance for the MultiNLI dataset, which focuses on semantic entailment generation, with the lowest readability gap of 2.21 and the highest factuality and coherence scores among the models. However, while its BLEU score is the highest, it slightly trails behind GPT-4 in SARI, demonstrating strong but not leading performance in generating entailments that are semantically aligned with the input.

In the MRPC dataset, which tests the model's ability to generate paraphrases, Mistral-ReadCtrl leads with a readability gap of 1.66, the highest factuality and coherence scores, and the best BLEU and SARI scores. This highlights its exceptional ability to generate paraphrases that not only adhere closely to the desired readability level but also maintain high levels of linguistic and contextual integrity.

Overall, the data from the unseen tasks confirm Mistral-ReadCtrl’s robust generalization capabilities across different types of text generation tasks, demonstrating its potential as a versatile tool in NLP applications where adapting to varied content types and maintaining consistent quality is critical.

[Figure 3: Human and AI preference evaluation results comparing Mistral-ReadCtrl and GPT-4.]

4 Human Evaluation

4.1 Human Evaluation settings

Our human evaluation was conducted by 5 human evaluators and 1 expert evaluator (more details can be found in Appendix B). We randomly sampled six examples from the test set of each of the six datasets, so 36 examples in total appeared in the human evaluation. We gave the annotators detailed instructions: "You are evaluating two systems, both of which are trying to convert inputs to specific readability requirements to produce output suitable for the user. I will show you the input and output of the two systems on grade 2/5/8/11, respectively. Tell me which system's output you prefer by specifying system 1 or system 2, or tie if the quality is the same. Please explain the reason for your preference." The annotators used our evaluation system to indicate their preference; see Figure 4 (left). Each time, we randomly shuffled the outputs of the two systems (Mistral-ReadCtrl and GPT-4), and the annotators chose the one that better met the readability requirements and had higher output quality; if they judged the two outputs to be tied, they could choose both. After obtaining judgments from multiple people per instance, we did not aggregate their labels before calculating the win rate but counted them individually. We used a similar setting for AI evaluation, with claude-3-opus-20240229 and gpt-3.5-turbo-0125 as the judges (more details can be found in Appendix C).
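Because judgments are counted individually rather than aggregated per instance, the win rate can be tallied as in the following sketch; the label names and data layout are illustrative.

```python
# A sketch of the win-rate tally: every (instance, annotator) judgment counts
# once, and ties are a valid label.
from collections import Counter

def win_rates(judgments: list[str]) -> dict:
    """judgments: one label per (instance, annotator) pair:
    'readctrl', 'gpt4', or 'tie'."""
    counts = Counter(judgments)
    total = sum(counts.values())
    return {label: counts[label] / total for label in ("readctrl", "gpt4", "tie")}

print(win_rates(["readctrl", "gpt4", "tie", "readctrl", "readctrl"]))
```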

After the preference evaluation, we worked with one linguistics expert to annotate readability control strategies. We summarized four candidate strategies for each grade level (see Table 2) and asked the expert to use our evaluation system to perform the annotation; see Figure 4 (right). For each output of our system (Mistral-ReadCtrl), the expert selected all applicable control strategies, with multiple selections allowed.

Table 2: Readability control strategies employed by Mistral-ReadCtrl at each target grade level, as annotated by the linguistics expert (percentage of outputs exhibiting each strategy).

Grade 2
  • Employ short, straightforward sentence structures - 100%
  • Focus only on essential details, omitting unnecessary complexity - 85.7%
  • Use very simple vocabulary and avoid complex words - 76.2%
  • Break down information into clear sequential steps - 35.7%
Grade 5
  • Introduce some more varied and content-specific vocabulary - 71.4%
  • Use longer sentences with conjunctions to combine ideas - 57.1%
  • Provide additional context and relevant details - 28.6%
  • Explain concepts more directly instead of narratives - 23.8%
Grade 8
  • Use complex sentence structures like passive voice - 66.7%
  • Employ richer descriptive language and vivid details - 54.8%
  • Incorporate academic and technical terminology - 47.6%
  • Establish clear logical connections between ideas - 21.4%
Grade 11
  • Construct elaborate compound-complex sentences - 42.9%
  • Use sophisticated vocabulary from all domains - 40.5%
  • Write with consistent formality and academic tone - 33.3%
  • Employ advanced stylistic techniques like figurative language - 23.8%

4.2 Human Evaluation Results

As shown in Figure 3, human evaluation prefers Mistral-ReadCtrl over GPT-4 with an overall win rate of 49.4%, while AI evaluation yields a larger win rate of 58.3%. Both the seen settings (ASSET, SNLI, PAWS) and the unseen settings (WikiSmall, MultiNLI, MRPC) exhibit consistent results across human and AI evaluation; the lead on WikiSmall and SNLI is most pronounced. Delving further, the expert annotations shed light on the operational tactics of Mistral-ReadCtrl. For example, when catering to Grade 2 readability, it implemented straightforward sentence structures with 100% adherence, focused on essential details 85.7% of the time, and employed very simple vocabulary in 76.2% of instances. For the more advanced Grade 5 and Grade 8 requirements, it adeptly introduced content-specific vocabulary (71.4% for Grade 5) and complex sentence structures (66.7% for Grade 8), illustrating the model's dexterity in scaling complexity according to the readability demands.

5 Related Work

Early efforts at readability control in natural language generation (NLG) included microplanning steps to tailor text to different target reading levels (Moraes et al., 2016; Agrawal and Carpuat, 2019; Marchisio et al., 2019). More recent studies, such as Luo et al. (2022), have investigated controllable abstractive and extractive approaches for generating summaries of biomedical documents tailored for layman and expert audiences. Concurrently, recent work (Pu and Demberg, 2023; Rao and Tetreault, 2018; Yao and Yu, 2021) examined the ability of language models to adapt their outputs to different target audiences and writing styles, ranging from formal to informal, whereas Imperial (2022) highlighted that GPT-2 models struggle to preserve the linguistic complexity of input prompts. Significant developments have also been made in Plain Language Summarization (PLS) of scientific papers (Devaraj et al., 2021; Goldsack et al., 2023; Guo et al., 2023), focusing on generating simplified texts that retain the original content's meaning. Unlike these methods, our "readability-controlled instruction learning (ReadCtrl)" method focuses on fine-grained readability control via direct instruction, allowing precise adaptation of text complexity. This approach ensures that outputs meet specific readability demands and is tested across text simplification, paraphrase generation, and semantic entailment generation tasks. Demonstrating its efficacy and versatility, ReadCtrl performs robustly in both seen and unseen settings.

Text simplification aims to enhance the readability of sentences by reducing their linguistic complexity, with important societal applications such as increasing accessibility for people with cognitive disabilities, as well as for patient education, non-native speakers, and children with reading difficulties (Martin et al., 2020; Cao et al., 2020). Various aspects of simplified outputs have been addressed, including adapting to specific levels (Nishihara et al., 2019), incorporating edit operations (Kumar et al., 2020; Mallinson et al., 2020), enforcing lexical and syntactic constraints (Martin et al., 2019), applying linguistically motivated syntactic rules (Maddela et al., 2020), and integrating complex span extraction and lay language generation (Chen et al., 2018; Kwon et al., 2022; Jiang and Xu, 2024; Yao et al., 2023) into the simplification process. In contrast to traditional text simplification, which primarily focuses on controlling the extent of paraphrasing, our approach is designed to produce succinct and informative output for various tasks in both seen and unseen settings, while maintaining different fine-grained levels of desired readability. Our contribution extends readability control beyond mere style transfer to a versatile, instruction-based framework that meets diverse user needs.

Finally, our work follows the instruction tuning technique (Zhang et al., 2023a) to help LLMs learn to follow readability-controlled instructions. Traditional supervised fine-tuning (SFT) techniques often struggle with fine-grained readability control, as they depend on manual annotation or synthetic data generation to enrich readability data, followed by model fine-tuning. This requires considerable financial and time resources, with the work repeated for each domain-specific application. Alternatively, recent advances in instruction learning offer a more generalized approach, as highlighted in several studies (Wei et al., 2021; Wang et al., 2022; Honovich et al., 2022; Zhang et al., 2023b; Tran et al., 2023). Instruction learning operates on the premise that the model already possesses the necessary knowledge and skills to perform the target task but requires instructional data to learn how to follow the instructions effectively. Using a FLAN-style instruction fine-tuning method (Wei et al., 2021), we gathered task-specific instructions for ReadCtrl and conducted fine-tuning. Our automatic and human evaluations on seen and unseen tasks confirm ReadCtrl's effectiveness, simplifying the adaptation process and broadening the application scope with minimal data needs.

6 Conclusion

The ReadCtrl approach significantly enhances the adaptability of LLMs by dynamically adjusting content complexity to suit various readability requirements. Outperforming mainstream models like GPT-4 in evaluations, Mistral-ReadCtrl demonstrates its capability to produce nuanced, high-quality outputs, thereby showing the promise of personalized content generation.

7 Limitations

In this paper, we propose a new instruction-learning approach to enhance the controllability of readability levels. While this approach is not specific to any particular language, we conducted all of our experiments and analysis exclusively on English-language datasets. Additionally, due to resource limitations, our analysis was limited to text simplification (ASSET and WikiSmall), paraphrase generation (PAWS and MRPC), and semantic entailment generation (SNLI and MultiNLI), reflecting their prevalent use in NLG studies. Consequently, this paper does not explore style variations in non-English settings or in other relevant tasks and datasets, such as the text-to-text generation datasets mentioned in the tutorial at ACL 2024 (Dou et al., 2023). Thus, the scope of this study is confined, and the results may not be universally applicable across different linguistic and stylistic contexts. For readability evaluation, studies have shown that readability formulas may not be ideal for medical text (Zheng and Yu, 2017) because short texts (e.g., abbreviations and fragments rather than complete sentences) are common in EHR notes. In future work, we may explore machine-learning-based approaches (Zheng et al., 2018) for readability in subdomains. Finally, due to resource constraints, we were unable to have actual grade 2, 5, 8, and 11 students provide pairwise preference feedback during our human evaluation. In the future, we plan to collect human evaluation feedback from more appropriate target groups to further enhance the reliability of our results.

8 Ethics Statement

While Mistral-ReadCtrl has demonstrated a high level of readability controllability on several NLG datasets, this does not imply that it can be used as a general controllable interactive model (like some chatbot systems). These models should be thoroughly evaluated before being used in different settings to ensure reliability.

References

  • Achiam et al. (2023) Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
  • Agrawal and Carpuat (2019) Sweta Agrawal and Marine Carpuat. 2019. Controlling text complexity in neural machine translation. arXiv preprint arXiv:1911.00835.
  • Alva-Manchego et al. (2020) Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, and Lucia Specia. 2020. ASSET: A dataset for tuning and evaluation of sentence simplification models with multiple rewriting transformations. arXiv preprint arXiv:2005.00481.
  • Anthropic (2024) Anthropic. 2024. The Claude 3 model family: Opus, Sonnet, Haiku.
  • Bowman et al. (2015) Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.
  • Cao et al. (2020) Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu, and Tat-Seng Chua. 2020. Expertise style transfer: A new task towards better communication between experts and laymen. arXiv preprint arXiv:2005.00701.
  • Chen et al. (2023) Jin Chen, Zheng Liu, Xunpeng Huang, Chenwang Wu, Qi Liu, Gangwei Jiang, Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian, and Enhong Chen. 2023. When large language models meet personalization: Perspectives of challenges and opportunities. ArXiv, abs/2307.16376.
  • Chen et al. (2018) Jinying Chen, Emily Druhl, Balaji Polepalli Ramesh, Thomas K. Houston, Cynthia A. Brandt, Donna M. Zulman, Varsha G. Vimalananda, Samir Malkani, and Hong Yu. 2018. A natural language processing system that links medical terms in electronic health record notes to lay definitions: system development using physician reviews. Journal of Medical Internet Research, 20(1):e26.
  • Coleman and Liau (1975) Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2):283.
  • Devaraj et al. (2021) Ashwin Devaraj, Byron C. Wallace, Iain J. Marshall, and Junyi Jessy Li. 2021. Paragraph-level simplification of medical texts. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, volume 2021, page 4972. NIH Public Access.
  • Dolan and Brockett (2005) Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third International Workshop on Paraphrasing (IWP2005).
  • Dou et al. (2023) Yao Dou, Philippe Laban, Claire Gardent, and Wei Xu. 2023. Automatic and human-AI interactive text generation. arXiv preprint arXiv:2310.03878.
  • Goldsack et al. (2022) Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2022. Making science simple: Corpora for the lay summarisation of scientific literature. arXiv preprint arXiv:2210.09932.
  • Goldsack et al. (2023) Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2023. Domain-driven and discourse-guided scientific summarisation. In European Conference on Information Retrieval, pages 361–376. Springer.
  • Gunning (1952) Robert Gunning. 1952. The technique of clear writing.
  • Guo et al. (2023) Yue Guo, Tal August, Gondy Leroy, Trevor Cohen, and Lucy Lu Wang. 2023. APPLS: A meta-evaluation testbed for plain language summarization. arXiv preprint arXiv:2305.14341.
  • Guo et al. (2021) Yue Guo, Wei Qiu, Yizhong Wang, and Trevor Cohen. 2021. Automated lay language summarization of biomedical scientific reviews. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 160–168.
  • Honovich et al. (2022) Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689.
  • Huang et al. (2023) Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, and Xing Xie. 2023. Recommender AI agent: Integrating large language models for interactive recommendations. arXiv preprint arXiv:2308.16505.
  • Imperial (2022) Joseph Marvin Imperial. 2022. Uniform complexity for text generation. arXiv preprint arXiv:2204.05185.
  • Jiang et al. (2023) Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7B. arXiv preprint arXiv:2310.06825.
  • Jiang and Xu (2024) Chao Jiang and Wei Xu. 2024. MedReadMe: A systematic study for fine-grained sentence readability in the medical domain. arXiv preprint arXiv:2405.02144.
  • Kincaid et al. (1975) J. Peter Kincaid, Robert P. Fishburne Jr., Richard L. Rogers, and Brad S. Chissom. 1975. Derivation of new readability formulas (Automated Readability Index, Fog Count and Flesch Reading Ease formula) for Navy enlisted personnel.
  • Kirk et al. (2024) Hannah Rose Kirk, Bertie Vidgen, Paul Röttger, and Scott A. Hale. 2024. The benefits, risks and bounds of personalizing the alignment of large language models to individuals. Nature Machine Intelligence, pages 1–10.
  • Kumar et al. (2020) Dhruv Kumar, Lili Mou, Lukasz Golab, and Olga Vechtomova. 2020. Iterative edit-based unsupervised sentence simplification. arXiv preprint arXiv:2006.09639.
  • Kwon et al. (2022) Sunjae Kwon, Zonghai Yao, Harmon S. Jordan, David A. Levy, Brian Corner, and Hong Yu. 2022. MedJEx: A medical jargon extraction model with Wiki's hyperlink span and contextualized masked language model score. arXiv preprint arXiv:2210.05875.
  • Laban et al. (2022) Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLI-based models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163–177.
  • Lan et al. (2024) Tian Lan, Wenwei Zhang, Chen Xu, Heyan Huang, Dahua Lin, Kai Chen, and Xian-ling Mao. 2024. CriticBench: Evaluating large language models as critic. arXiv preprint arXiv:2402.13764.
  • Luo et al. (2022) Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2022. Readability controllable biomedical document summarization. arXiv preprint arXiv:2210.04705.
  • Maddela et al. (2020) Mounica Maddela, Fernando Alva-Manchego, and Wei Xu. 2020. Controllable text simplification with explicit paraphrasing. arXiv preprint arXiv:2010.11004.
  • Mallinson et al. (2020) Jonathan Mallinson, Aliaksei Severyn, Eric Malmi, and Guillermo Garrido. 2020. Felix: Flexible text editing through tagging and insertion. arXiv preprint arXiv:2003.10687.
  • Marchisio et al. (2019) Kelly Marchisio, Jialiang Guo, Cheng-I Lai, and Philipp Koehn. 2019. Controlling the reading level of machine translation output. In Proceedings of Machine Translation Summit XVII: Research Track, pages 193–203.
  • Martin et al. (2020) Louis Martin, Angela Fan, Éric de la Clergerie, Antoine Bordes, and Benoît Sagot. 2020. MUSS: Multilingual unsupervised sentence simplification by mining paraphrases. arXiv preprint arXiv:2005.00352.
  • Martin et al. (2019) Louis Martin, Benoît Sagot, Éric de la Clergerie, and Antoine Bordes. 2019. Controllable sentence simplification. arXiv preprint arXiv:1910.02677.
  • Moraes et al. (2016) Priscilla Moraes, Kathleen F. McCoy, and Sandra Carberry. 2016. Enabling text readability awareness during the micro planning phase of NLG applications. In Proceedings of the 9th International Natural Language Generation Conference, pages 121–131.
  • Nishihara et al. (2019) Daiki Nishihara, Tomoyuki Kajiwara, and Yuki Arase. 2019. Controllable text simplification with lexical constraint loss. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 260–266.
  • Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.
  • Pu and Demberg (2023) Dongqi Pu and Vera Demberg. 2023. ChatGPT vs human-authored text: Insights into controllable text summarization and sentence style transfer. arXiv preprint arXiv:2306.07799.
  • Qin et al. (2024) Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, Pengfei Liu, and Dong Yu. 2024. InFoBench: Evaluating instruction following ability in large language models. arXiv preprint arXiv:2401.03601.
  • Rao and Tetreault (2018) Sudha Rao and Joel Tetreault. 2018. Dear sir or madam, may I introduce the GYAFC dataset: Corpus, benchmarks and metrics for formality style transfer. arXiv preprint arXiv:1803.06535.
  • Ribeiro et al. (2023) Leonardo F. R. Ribeiro, Mohit Bansal, and Markus Dreyer. 2023. Generating summaries with controllable readability levels. arXiv preprint arXiv:2310.10623.
  • Richardson et al. (2023) Chris Richardson, Yao Zhang, Kellen Gillespie, Sudipta Kar, Arshdeep Singh, Zeynab Raeesy, Omar Zia Khan, and Abhinav Sethy. 2023. Integrating summarization and retrieval for enhanced personalization via large language models. arXiv preprint arXiv:2310.20081.
  • Senter and Smith (1967) R. J. Senter and Edgar A. Smith. 1967. Automated Readability Index. Technical report, DTIC Document.
  • Shanahan et al. (2023) Murray Shanahan, Kyle McDonell, and Laria Reynolds. 2023. Role play with large language models. Nature, pages 1–6.
  • Sun et al. (2023) Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, and Xuezhe Ma. 2023. Evaluating large language models on controlled generation tasks. arXiv preprint arXiv:2310.14542.
  • Tran et al. (2023) Hieu Tran, Zhichao Yang, Zonghai Yao, and Hong Yu. 2023. BioInstruct: Instruction tuning of large language models for biomedical natural language processing. arXiv preprint arXiv:2310.19975.
  • Vajjala (2021) Sowmya Vajjala. 2021. Trends, limitations and open challenges in automatic readability assessment research. arXiv preprint arXiv:2105.00973.
  • Wang et al. (2023) Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926.
  • Wang et al. (2022) Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-Instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560.
  • Wei et al. (2021) Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
  • Williams et al. (2017) Adina Williams, Nikita Nangia, and Samuel R. Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.
  • Xu et al. (2016) Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415.
  • Yao et al. (2023) Zonghai Yao, Nandyala Siddharth Kantu, Guanghao Wei, Hieu Tran, Zhangqi Duan, Sunjae Kwon, Zhichao Yang, Hong Yu, et al. 2023. README: Bridging medical jargon and lay understanding for patient education through data-centric NLP. arXiv preprint arXiv:2312.15561.
  • Yao and Yu (2021) Zonghai Yao and Hong Yu. 2021. Improving formality style transfer with context-aware rule injection. arXiv preprint arXiv:2106.00210.
  • Zeng et al. (2023) Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, and Danqi Chen. 2023. Evaluating large language models at evaluating instruction following. arXiv preprint arXiv:2310.07641.
  • Zhang et al. (2023a) Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, et al. 2023a. Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792.
  • Zhang et al. (2023b) Xinlu Zhang, Chenxin Tian, Xianjun Yang, Lichang Chen, Zekun Li, and Linda Ruth Petzold. 2023b. AlpaCare: Instruction-tuned large language models for medical application. arXiv preprint arXiv:2310.14558.
  • Zhang et al. (2019) Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. arXiv preprint arXiv:1904.01130.
  • Zheng and Yu (2017) Jiaping Zheng and Hong Yu. 2017. Readability formulas and user perceptions of electronic health records difficulty: a corpus study. Journal of Medical Internet Research, 19(3):e59.
  • Zheng et al. (2018) Jiaping Zheng, Hong Yu, et al. 2018. Assessing the readability of medical documents: a ranking approach. JMIR Medical Informatics, 6(1):e8611.
  • Zheng et al. (2024) Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36.
  • Zhong et al. (2022) Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multi-dimensional evaluator for text generation. arXiv preprint arXiv:2210.07197.
  • Zhou et al. (2023) Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. 2023. Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911.
  • Zhu et al. (2010) Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1353–1361.
[Figure 4: Evaluation interface used for preference judgments (left) and readability control strategy annotation (right).]

Appendix A Readability Metrics

Developed by Kincaid et al. (1975), the Flesch-Kincaid Grade Level (FKGL) estimates the U.S. school grade level needed to understand a text; higher scores indicate more difficult text. It is calculated using the formula:

\text{FKGL} = 0.39\left(\frac{\text{totalWords}}{\text{totalSentences}}\right) + 11.8\left(\frac{\text{totalSyllables}}{\text{totalWords}}\right) - 15.59

The Gunning Fog Index (GFI), proposed by Gunning (1952), quantifies the level of formal education required to comprehend a text upon first reading. It is computed as:

\text{GFI} = 0.4\left(\frac{\text{totalWords}}{\text{totalSentences}} + 100\,\frac{\text{longWords}}{\text{totalWords}}\right)

where longWords are defined as words containing more than seven characters. Higher values indicate lower readability.

The Automated Readability Index (ARI), developed by Senter and Smith (1967), corresponds to the U.S. school grade level needed to understand the text. It uses the formula:

\text{ARI} = 4.71\left(\frac{\text{totalCharacters}}{\text{totalWords}}\right) + 0.5\left(\frac{\text{totalWords}}{\text{totalSentences}}\right) - 21.43

Developed by Coleman and Liau (1975), the Coleman-Liau Index (CLI) focuses on characters rather than syllables to assess text readability. The formula for CLI is:

\text{CLI} = 0.0588\,L - 0.296\,S - 15.8

where L is the average number of letters per 100 words, and S is the average number of sentences per 100 words. This metric provides an estimate of the grade level required to understand the text.
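For reference, the four formulas above can be transcribed directly as follows. This is a rough sketch whose tokenizer and syllable counter are crude heuristics of our own, so the values will only approximate those of a production readability library.

```python
# Direct transcription of the Appendix A formulas (a sketch; the syllable
# counter is a vowel-group heuristic, so FKGL values are approximate).
import re

def _sentences(text): return max(1, len(re.findall(r"[.!?]+", text)))
def _words(text): return re.findall(r"[A-Za-z']+", text)
def _syllables(word): return max(1, len(re.findall(r"[aeiouyAEIOUY]+", word)))

def fkgl(text):
    words, sents = _words(text), _sentences(text)
    syl = sum(_syllables(w) for w in words)
    return 0.39 * len(words) / sents + 11.8 * syl / len(words) - 15.59

def gfi(text):
    words, sents = _words(text), _sentences(text)
    long_words = [w for w in words if len(w) > 7]   # longWords as defined above
    return 0.4 * (len(words) / sents + 100 * len(long_words) / len(words))

def ari(text):
    words, sents = _words(text), _sentences(text)
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.5 * len(words) / sents - 21.43

def cli(text):
    words, sents = _words(text), _sentences(text)
    letters = sum(len(w) for w in words)
    L, S = 100 * letters / len(words), 100 * sents / len(words)
    return 0.0588 * L - 0.296 * S - 15.8
```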

[Figure 5]

Appendix B More details about human evaluation

We provide additional details on our human evaluation setup. The human preference annotators are five students who have completed a bachelor's degree or above at an American university and are fluent in English. We added some tasks with known answers (i.e., cases where the most/least readable and good/bad quality text should be clear), enabling us to estimate the accuracy of the annotators who worked on them. Annotators with low accuracy on the tasks with known answers were automatically removed from our worker pool; only the annotators who passed these tests were accepted to work on the human preference evaluation in this paper. We gave annotators fair compensation ($20/hour).

Appendix C More details about LLM evaluation

To reduce the heavy human evaluation workload and make the evaluation easier to reproduce, we use a setting similar to our human preference evaluation for AI evaluation. Comparison-based feedback evaluation assesses the accuracy of an LLM in deciding preferences between two responses. However, it is widely acknowledged that current LLMs exhibit significant positional bias (Lan et al., 2024; Wang et al., 2023; Zheng et al., 2024; Zeng et al., 2023), i.e., LLMs tend to prefer responses based on their specific position in the prompt. To evaluate the real capability, we implement a verification process that mitigates the effects of positional bias. Specifically, given responses $R_a$ and $R_b$ to be compared, we obtain the comparison in both orders, denoted $F^c_a = F_c(R_a, R_b)$ and $F^c_b = F_c(R_b, R_a)$. The objective score is computed as:

s = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\left(L(F^c_{a,i}, F^c_{b,i})\right)

where $L(F^c_a, F^c_b)$ is true if and only if $F^c_a \neq F^c_b$ and both $F^c_a$ and $F^c_b$ align with the ground-truth preference label, and $N$ is the number of test samples. The prompts we used for the LLM-as-a-judge evaluation (claude-3-opus-20240229 and gpt-3.5-turbo-0125) can be found in Table 4.
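The order-swapping procedure can be implemented roughly as in the sketch below; ask_judge is a placeholder for the actual judge-model call and the prompt wording is illustrative, not the prompt in Table 4.

```python
# A sketch of order-swapped pairwise judging to mitigate positional bias.
# `ask_judge` stands in for a call to the judge model (e.g., claude-3-opus or
# gpt-3.5-turbo) that returns "1" or "2" for the preferred response.

def ask_judge(prompt: str) -> str:
    raise NotImplementedError("plug in the judge model call here")

def swapped_comparison(instruction: str, resp_a: str, resp_b: str) -> str | None:
    """Return 'a', 'b', or None when the judge is inconsistent across orders."""
    template = ("{instr}\nResponse 1: {r1}\nResponse 2: {r2}\n"
                "Which response better satisfies the readability requirement? "
                "Answer 1 or 2.")
    first = ask_judge(template.format(instr=instruction, r1=resp_a, r2=resp_b))
    second = ask_judge(template.format(instr=instruction, r1=resp_b, r2=resp_a))
    if first == "1" and second == "2":
        return "a"      # judge prefers resp_a in both orders
    if first == "2" and second == "1":
        return "b"      # judge prefers resp_b in both orders
    return None         # position-dependent verdict, counted as inconsistent
```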

Parameter | Value
Computing Infrastructure | 40GB NVIDIA A100 GPU
Optimizer | Adam
Optimizer Params | $\beta=(0.9, 0.999)$, $\epsilon=10^{-8}$
Learning Rate | $3\times10^{-4}$
Learning Rate Decay | Linear
Weight Decay | 0
Warmup Steps | 200
Batch Size | 128
Epochs | 5

Table 3: Hyperparameters used to train the models on our combined dataset.

Appendix D Hyper-parameter Settings

The experiments were executed using version 4.37.1 of the transformers library released by Hugging Face. In Table 3, we report the hyperparameters used to train the models on our combined dataset. We use the Adam optimizer and a linearly decreasing learning rate schedule with 200 warm-up steps. All experiments were conducted on two NVIDIA A100 GPUs, each with 40 GB of memory; the CPU was an Intel Xeon Gold 6230 processor, and the system was equipped with 192 GB of RAM.
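For illustration only (not the exact training script), the settings in Table 3 roughly correspond to the following Hugging Face TrainingArguments; the output directory and the split of the effective batch size of 128 across devices and gradient accumulation are our assumptions:

```python
from transformers import TrainingArguments

# Sketch of the Table 3 settings; output_dir and the per-device/accumulation
# split of the batch size are illustrative assumptions, not reported values.
training_args = TrainingArguments(
    output_dir="readctrl-mistral-7b",      # hypothetical output path
    num_train_epochs=5,
    learning_rate=3e-4,
    lr_scheduler_type="linear",            # linearly decreasing schedule
    warmup_steps=200,
    weight_decay=0.0,
    per_device_train_batch_size=16,        # 16 x 2 GPUs x 4 accumulation = 128
    gradient_accumulation_steps=4,
    optim="adamw_torch",                   # Adam-family optimizer with the betas/epsilon above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```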

Appendix E Experiments with GPT-3.5, GPT-4, Claude-3

All of our experiments with GPT-3.5, GPT-4, and Claude-3 were conducted between 25 March 2023 and 13 April 2024 through the OpenAI and Anthropic APIs. We set temperature = 1, top_p = 1, frequency penalty = 0, and presence penalty = 0.
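For reference, a request with these decoding settings might look like the sketch below (the model identifier and prompt content are placeholders; the actual prompts are listed in Table 4):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Sketch of a GPT-4 request using the sampling settings reported above.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model identifier
    messages=[{
        "role": "user",
        "content": "please output an entailment at a 5-reading level.\n\n<input text>",
    }],
    temperature=1,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)
print(response.choices[0].message.content)
```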

Appendix F ReadCtrl instruction following evaluation setting

We plotted Figures 1, 6, 7, 8, 9, 10, and 11 by calculating the readability scores (or reading levels) of the outputs generated in response to prompts requesting specific reading levels from 1 to 12. These calculations were performed across all test sets of the six datasets described in the Experiments section. Additionally, we calculated the standard deviation of the readability scores across these test sets to assess the consistency of the outputs' readability.
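A minimal sketch of this per-level aggregation, assuming the textstat package for Flesch-Kincaid grade levels (the helper and variable names are ours, not from the released code):

```python
import statistics
import textstat

def readability_by_requested_level(outputs_by_level):
    """Aggregate readability of generated outputs per requested grade level.

    `outputs_by_level` maps a requested grade (1-12) to the list of texts
    generated for that grade on a dataset's test set.
    """
    summary = {}
    for grade, texts in outputs_by_level.items():
        scores = [textstat.flesch_kincaid_grade(t) for t in texts]
        summary[grade] = {
            "mean_fkgl": statistics.mean(scores),
            "std_fkgl": statistics.stdev(scores) if len(scores) > 1 else 0.0,
        }
    return summary
```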

Appendix G Examples of output generated by Mistral-ReadCtrl and GPT-4 during ReadCtrl instruction following evaluation

The example table following Table 4 presents distinct levels of output generated by Mistral-ReadCtrl and GPT-4, together with their readability scores as measured by the Flesch-Kincaid Grade Level (FKG), Gunning fog index (GFI), and Coleman-Liau index (CLI) metrics.

Here we examine the observed discrepancies between the Readability Gap and the performance curves in our evaluation, as illustrated by our results on the PAWS and MultiNLI datasets. The Readability Gap, calculated as the average difference between the actual readability score of an output and the requested readability score across all samples, behaves quite differently across datasets.
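For concreteness, a minimal sketch of this Readability Gap computation (helper names are ours; we assume the Flesch-Kincaid grade from the textstat package as the readability score, and a signed average, though an absolute-value variant is an equally plausible reading of the definition):

```python
import textstat

def readability_gap(outputs, requested_levels):
    """Average (signed) difference between achieved and requested readability.

    `outputs` is a list of generated texts; `requested_levels` is the list of
    grade levels (1-12) requested for each output.
    """
    gaps = [
        textstat.flesch_kincaid_grade(text) - level
        for text, level in zip(outputs, requested_levels)
    ]
    return sum(gaps) / len(gaps)
```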

For the PAWS dataset, although the Readability Gap appears almost perfect in Table 1, the corresponding curve does not exhibit equally favorable performance. This anomaly can be attributed to the output readability distribution of PAWS, which is concentrated within a relatively narrow range (typically between grades 4 and 8). While this concentration yields excellent performance within that median range, it leads to less generalized performance across the full spectrum of readability levels (from 1 to 12). Therefore, even a small Readability Gap in numerical terms may not accurately reflect an evenly distributed ability to hit all requested readability levels.

Conversely, the MultiNLI dataset exhibits a larger Readability Gap in Table 1, yet the performance curve approaches perfection. This suggests that while the average gap is larger, the outputs are more uniformly distributed across the entire range of readability levels, allowing for closer adherence to the target levels across a broader spectrum. This indicates a more generalized and adaptable performance despite the numerically larger gap.

This analysis underscores the importance of considering both the Readability Gap and the distribution of output readability scores when assessing model performance. A low Readability Gap might suggest excellent average performance but could conceal poor adaptability across a range of readability levels. Conversely, a higher Readability Gap might indicate a more uniform distribution of performance across all levels, suggesting a different kind of effectiveness.

Further investigation into these patterns for all six datasets employed in our study reveals similar trends.

[Figures 6-11: ReadCtrl instruction-following evaluation plots across the six datasets, as described in Appendix F.]

Appendix H Prompts

AI Evaluation prompt:
You are evaluating two systems, both of which are trying to convert inputs to specific readability requirements to produce output suitable for the user.
I will show you the input and output of the two systems on grade 2/5/8/11, respectively. Tell me which systems output you prefer by specify system 1 or system 2 or tie if the quality is the same. Please explain the reason for your preference.
Input: {input}
System 1 output:
grade 2: {system1_2}
grade 5: {system1_5}
grade 8: {system1_8}
grade 11: {system1_11}
System 2 output:
grade 2: {system2_2}
grade 5: {system2_5}
grade 8: {system2_8}
grade 11: {system2_11}
Please use the following json format for your output:
'grade 2 preference': xxxx,
'grade 2 preference reasons': xxxx,
'grade 5 preference': xxxx,
'grade 5 preference reasons': xxxx,
'grade 8 preference': xxxx,
'grade 8 preference reasons': xxxx,
'grade 11 preference': xxxx,
'grade 11 preference reasons': xxxx
Please only output your response following the required format, and do not output any other content. Now tell me your preference and reasons:

Readability Control Generation prompt:
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
please output an entailment at a {grade}-reading level.
### Input:
{input}
### Response:

Table 4: All prompts used in our work.
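As a usage illustration (our own helper, not part of the released code), the readability control generation prompt above can be instantiated per request as follows:

```python
# Hypothetical helper for filling in the readability control prompt from Table 4.
READCTRL_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\nplease output an entailment at a {grade}-reading level.\n"
    "### Input:\n{input}\n"
    "### Response:\n"
)

def build_prompt(input_text: str, grade: int) -> str:
    """Instantiate the readability control prompt for a requested grade (1-12)."""
    return READCTRL_TEMPLATE.format(grade=grade, input=input_text)

print(build_prompt("The tarantula spun a black cord and attached it to the ball.", grade=2))
```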

Document | FKG | GFI | CLI
Source document: The tarantula, the trickster character, spun a black cord and, attaching it to the ball, crawled away fast to the east, pulling on the cord with all his strength. | 9.9 | 15.74 | 14.8
Second-grade reading level (Mistral-ReadCtrl): The tarantula spun a black cord. He attached it to the ball. He crawled away to the east. He pulled on the cord with all his strength. | 0.0 | 4.2 | 0.8
Second-grade reading level (GPT-4): The tarantula tricked others by making a black cord. It attached it to a ball and moved quickly towards the east while pulling the cord. | 4.6 | 6.6 | 5.9
Fifth-grade reading level (Mistral-ReadCtrl): The tarantula, the trickster character, spun a black cord. He attached it to the ball and crawled away fast to the east. He pulled on the cord with all his strength. | 2.6 | 6.7 | 4.2
Fifth-grade reading level (GPT-4): The tarantula, who is known for playing tricks, made a black cord. He attached it to the ball and quickly crawled east, pulling the cord as hard as he could. | 4.4 | 7.33 | 6.2
Eighth-grade reading level (Mistral-ReadCtrl): The tarantula, the trickster character, spun a black cord and attached it to the ball. He crawled away fast to the east, pulling on the cord with all his strength. | 4.4 | 8.67 | 7.1
Eighth-grade reading level (GPT-4): The tarantula, the sneaky character, made a black cord and tied it to the ball. It quickly moved east, pulling the cord strongly. | 4.2 | 8.08 | 6.2
Eleventh-grade reading level (Mistral-ReadCtrl): The tarantula, the trickster character, spun a black cord and attached it to the ball before crawling away to the east, pulling on the cord with all his strength. | 11.1 | 14.36 | 14.8
Eleventh-grade reading level (GPT-4): The character of the tarantula, known for trickery, spun a black cord. He attached this cord to the ball and rapidly scuttled eastward, tugging at the cord with maximum force. | 6.8 | 10.0 | 9.0