Reflection and Understanding, Weekly
In Fall 2024, I used a simple AI tutor in two online courses - FBM210 Fashion Industry Technologies and CIS208 Database Systems - to boost student engagement with provided content in an asynchronous course and to serve as an entry ticket in a HyFlex flipped classroom. The tool provides instant feedback, motivating students to review content before class. It has been effective, encouraging deeper understanding and readiness. Its success depends on well-thought-out prompts that enhance, not replace, the learning materials. I’d like to share my process and experience; I’ll also touch on how a simple change like this might be integrated with the upcoming SUNY AI Tutor project.
The primary goal of integrating an AI tutor into my courses was to enhance student engagement and understanding before attending HyFlex sessions or progressing in asynchronous courses. I aimed to ensure that students were not merely skimming through materials but were actively engaging and thinking deeply about the content. The AI tutor was introduced to foster a more reflective and analytical approach to learning, making sure that students genuinely interacted with the course material prior to coming to class - or in the case of the asynchronous course, prior to beginning the higher-stakes assessment.
To implement this, the AI tutor was incorporated into mid-week assignments. Students were required to interact with a large language model (LLM) using specific prompts provided in the course. They would paste these prompts into their chosen LLM platform and submit a transcript of the conversation as part of the assignment. The method was designed to be low-stakes, with grades ranging from 0 to 2 points based on the depth of the responses: 0 for minimal or no interaction, 1 for basic responses, and 2 for a more engaged, probing dialogue. This grading scheme encouraged students to explore ideas thoroughly without the pressure of high-stakes outcomes.
The integration of an AI tutor aligns with my pedagogical goals, serving both as a formative assessment tool and as a means to ensure active engagement. It helps in identifying where students might struggle with the material and allows for timely instructional support. Moreover, this approach provides a safe space for students to express their questions without the fear of peer judgment, enhancing their learning experience and preparation for more formal assessments.
The development of the prompts was absolutely critical. I designed each prompt to use the Socratic method, guiding the AI to engage students as a questioning tutor rather than just giving answers. This approach encourages students to think more deeply and critically about the material.
SAMPLE PROMPT:
(1) Copy and paste the following text prompt into your favorite large language model (LLM). If you don't have a favorite, use ChatGPT. (2) Answer the questions that the AI poses, then (3) copy and paste the entire chat, including your answers, into the "Text Submission" box. (If you don't see a Text Submission box, click on the title for this assignment.)
--- AI Prompt Begins Below: Copy and Paste after this line ---
I'm going to give you a list of questions. Ask me one question at a time and wait for my answer. Take the role of a TA or Tutor, using the Socratic Method. After each answer, give me brief feedback. Then, explain anything it seems like I don't understand. Ask if I'd like additional information on that question. When I indicate I'm finished, ask me the next question.
Sample Questions:
Describe which locking mechanism you think makes the most sense—resource locking or serializable transactions—and explain why.
If you were asked to balance reliability, security, and performance in a database, what would be your approach? Which of these might be most important in different scenarios, and why?
What challenges do you think could arise if you use pessimistic locking in an application with many users? How might it affect user experience or system performance?
What is the impact of pulling data, running lengthy processing on it, then updating records? How might you minimize that impact?
Why might ACID compliance be important for a database’s operation? What could happen if a database didn’t follow these properties?
GCC uses two DBMS systems—one that operates on the live Oracle server, and one that operates on a copy of data updated every six hours. When might you use each system, and why?
Can you describe backup levels of full backup versus change (incremental) backup? Why are both types of backup necessary?
What is the significance of maintaining an off-site backup? (This was not addressed in your textbook; an off-site backup is kept at a physical location other than your main database.)
What is the weakest point in any database security system? How might you improve security for that element?
Each exercise includes a series of questions aimed at deepening students' understanding and analysis of the content. These questions challenge students to analyze, synthesize, and evaluate information, fostering more engaged learning.
This structured questioning helps students articulate their thoughts and navigate complex topics. It also allows me to see where student interest lies, so I can tailor off-the-cuff content to their specific interests.
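A few of the questions above (ACID compliance, locking, the cost of long-running updates) have direct analogues in code. As a concrete anchor, here is a minimal sketch of a transaction that either fully succeeds or fully rolls back, written with Python's built-in sqlite3 module; the accounts table and the transfer scenario are my own hypothetical illustration, not part of the course materials.

```python
# Hypothetical illustration of atomic transactions (the "A" in ACID),
# using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL NOT NULL)")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0), (2, 50.0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on any error
        conn.execute("UPDATE accounts SET balance = balance - 75 WHERE id = 1")
        # If this second statement failed, the first would be rolled back too,
        # so the database never shows a half-finished transfer.
        conn.execute("UPDATE accounts SET balance = balance + 75 WHERE id = 2")
except sqlite3.Error:
    print("Transfer failed; no partial changes were saved.")

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
conn.close()
```

The `with conn:` block is what provides the all-or-nothing behavior: both UPDATE statements commit together or neither does, which is the property the ACID question asks students to reason about.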
Here are a few example exchanges excerpted from student transcripts in both courses.
ChatGPT said: If you were asked to balance reliability, security, and performance in a database, what would be your approach? Which of these might be most important in different scenarios, and why?
To balance reliability, security, and performance in a database, prioritize based on your specific needs. For high security, focus on encryption and access controls. For high performance, optimize queries and use caching. For high reliability, implement backups and redundancy.
ChatGPT said: That's a solid approach! You've correctly identified the key factors to consider.
Let's delve a bit deeper.
You mentioned prioritizing based on specific needs. Can you provide some examples of scenarios where one factor might outweigh the others? For instance, when might security be the most critical concern? Or when would performance be the top priority?
Financial institutions prioritize security, real-time trading systems prioritize performance, and e-commerce platforms prioritize reliability.
ChatGPT said: When creating a table, how can you decide which fields should be indexed?
You said: I'm not sure what indexing is.
ChatGPT said: No problem! Indexing in a database is a way to improve the speed of data retrieval. Think of it like an index in a book—it helps you quickly find specific information without reading the entire book.
When deciding which fields to index, consider:
• Fields used in searches: If a column is frequently used in WHERE clauses, indexing it can speed up query performance.
• Fields used for sorting: If a column is often used in ORDER BY clauses, indexing can help here too.
• Foreign keys and primary keys: These are commonly indexed since they're critical for relationships between tables.
Would you like more detailed info on how indexing works or best practices for choosing fields?
You said: So, is indexing most effectively applied for cases such as generating reports?
ChatGPT said: Yes, exactly! Indexing is particularly useful in scenarios like generating reports, where you're running frequent queries to retrieve and filter data. By indexing the columns most often used in searches, filters, or sorting, you can significantly speed up report generation.
For example, if your reports frequently query by a date range or customer ID, indexing those columns would make retrieving the relevant data much faster.
Would you like to dive deeper into indexing strategies for reporting, or are you good with that?
You said: No, i think i get it now. Next question.
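Since indexing came up repeatedly in the transcripts, here is a small sketch of what the AI's advice looks like in practice, again using Python's built-in sqlite3 module; the orders table and its columns are hypothetical.

```python
# Hypothetical illustration of indexing the columns a report filters and sorts on,
# using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER,
    order_date TEXT
)""")

# Index the columns most often used in WHERE and ORDER BY clauses.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
conn.execute("CREATE INDEX idx_orders_date ON orders (order_date)")

# EXPLAIN QUERY PLAN reports whether SQLite will use an index for a given query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM orders WHERE customer_id = 42 ORDER BY order_date"
).fetchall()
print(plan)  # the plan should reference idx_orders_customer rather than a full table scan
conn.close()
```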
ChatGPT said: **How might you use Photoshop to design a printed label that includes sizing and care instructions? What considerations should you keep in mind?**
Here I tried to include a step-by-step guide on Photoshop to design a printed label with size and care instructions:
Design Considerations:
- Label size and shape
- Font and size
- Color scheme and contrast
- Content requirements (size, care instructions, material, etc.)
- Bleed and trim areas
- Resolution (300 DPI for print)
Here’s the Photoshop Steps:
1. Create a new document (File > New):
- Set dimensions (e.g., 4 x 8 inches)
- Resolution
- Color Mode
2. Design your label:
- Include necessary information:
- Size
- Care instructions (e.g., Wash, dry, etc.)
- Material
- Brand logo
3. Organize layers:
- Separate text, graphics, and background elements
4. Adjust text and graphics:
- Font size and style
- Color and contrast
5. Add bleed and trim areas
6. Save and export:
- Save as PSD (for editing)
- Export as PDF or JPEG (for printing)
ChatGPT said: You've outlined an excellent step-by-step guide! You’ve captured all the important design considerations, like label size, font, color scheme, and resolution. Keeping in mind bleed and trim areas ensures that no crucial elements are cut off during printing, which is a key detail.
A couple of extra considerations:
- **Use vector shapes for logos**: This ensures your logo stays crisp when resized or printed.
- **Test readability**: Make sure the font size is readable even on small labels, and the care instructions are clear.
- **Use layer styles**: Applying layer styles (like drop shadows or bevels) can help certain elements stand out more without making the label cluttered.
Would you like more tips on exporting files for different uses, like fabric printing or digital use?
No that’s great, I loved the label tags information. I’m ready for the next question.
(Unknown AI) **Describe how you would create a custom textile pattern in Photoshop and its importance in the overall garment design process.**
Id honestly play around with a image in photoshop, and repeat. Then use multiple tools to deisgn, even maybe take the rectangle marquee tool and cut part of the image. Adjust the colors with the curve tool all in photoshop. This is important as a designer especially a resort wear designer to be unique in my own art form and aesthetic.
(Unknown AI) That's a solid approach! Experimenting with repeating patterns, color adjustments, and using Photoshop's tools like the marquee and curve tool allows you to create unique and personalized textile designs. This can really set your resort wear apart by embedding your own artistic touch and aesthetic.
Would you like any tips on enhancing your custom textile patterns in Photoshop or how to streamline the pattern creation process for consistent results across your collection?
Love to hear how to streamline the pattern creation process for consist results?
(Unknown AI) To streamline your pattern creation process and ensure consistency, here are a few tips:
1. **Create a Seamless Pattern**: Use the "Offset" filter in Photoshop (Filter > Other > Offset) to create a seamless repeat. This helps align the edges so that the pattern flows continuously when repeated.
2. **Work in a Defined Canvas Size**: Start with a square canvas, like 1000x1000 pixels, to keep your pattern uniform. This ensures your pattern tiles seamlessly when applied to garments.
3. **Use Smart Objects**: Convert elements of your design into Smart Objects (right-click on a layer and select "Convert to Smart Object"). This allows you to easily edit or scale elements without losing quality, keeping your patterns flexible and editable.
4. **Create Pattern Swatches**: Save your design as a swatch or pattern preset in Photoshop (Edit > Define Pattern). This makes it easier to apply the same pattern across multiple designs or mockups quickly.
5. **Consistency in Colors**: Use Photoshop’s "Libraries" panel to save your brand's color palette, ensuring that you consistently apply your brand colors across different patterns.
6. **Pattern Mockups**: Apply your pattern to 3D garment mockups or flat lays to visualize how the pattern looks on fabric. This helps maintain consistency in scale and appearance across your collection.
Would you like more guidance on using these techniques in Photoshop?
Love this, no I am ready for the next question.
ChatGPT said: How do foreign keys help maintain relationships between tables?
You said: Foreign keys link the tables together by reusing data, so if the data is deleted on one table it is not necessarily detected from the whole database.
ChatGPT said: You're on the right track, but let’s clarify a bit more. Foreign keys establish and enforce a relationship between two tables by referencing the primary key of another table. This helps maintain referential integrity, ensuring that the relationships between tables are consistent.
• Explanation: A foreign key in one table points to the primary key of another table. This linkage ensures that data across tables stays synchronized, preventing records from being added or deleted in one table if it would violate the relationship constraints.
Would you like more details on how foreign keys work, or should we proceed to the next question?
You said: so it makes it so the information cannot contradict each other by looking at the primary key?
ChatGPT said: Exactly! Foreign keys ensure that the data in related tables remains consistent and doesn’t contradict itself. By referencing a primary key in another table, foreign keys enforce rules about what data can or cannot exist in the related table, preventing orphaned records and maintaining overall data integrity.
ChatGPT said: When creating a table, how can you decide which fields should be indexed?
You said: if their value is not dependent on another field.
ChatGPT said: That's a good point. However, deciding which fields to index involves considering several factors:
• Query Performance: Index fields that are frequently used in search conditions (WHERE clauses), join operations, or sorting operations (ORDER BY). Indexing these fields speeds up data retrieval.
• Uniqueness: Fields that have unique values, such as primary keys or unique constraints, should be indexed to enforce uniqueness and improve lookup efficiency.
• Frequency of Access: Index fields that are accessed often but updated infrequently. Indexes can slow down write operations (inserts, updates, deletes), so balance the need for fast read operations with the potential impact on write performance.
• Size and Type of Data: Indexing fields with large data types or long text fields might not always be practical. Instead, focus on fields with smaller, more frequent values.
• Explanation: While it's true that fields not dependent on others can be good candidates for indexing, the primary considerations are based on how the database queries the data and how frequently those fields are used.
Would you like more details on indexing, or should we move to the next question?
You said: so it also depends on how important they are, how much processing it takes from the computer, and if it is unique enough to be its own table?
ChatGPT said: Yes, you've captured several important aspects:
• Importance: Index fields that are crucial for queries and data retrieval are prioritized for indexing.
• Processing Overhead: Indexing fields involves a trade-off. While it speeds up read operations, it can slow down write operations (inserts, updates, and deletes) because the indexes need to be updated as well. Therefore, it’s essential to balance the need for fast queries with the impact on database performance.
• Uniqueness: Unique fields, like primary keys, naturally benefit from indexing as it enforces uniqueness and improves search efficiency. Deciding whether a field should be indexed or if it warrants its own table depends on how it’s used in the database schema.
• Explanation: Indexes are tools to optimize database performance but come with trade-offs. Properly assessing their use involves considering query patterns, data access frequencies, and performance impacts.
Would you like additional information on indexing, or shall we move to the next question?
You said: let move on to the next question
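The foreign-key exchange above also maps cleanly onto code. Here is a minimal sketch, again in sqlite3 with hypothetical customers and orders tables, of how a foreign-key constraint blocks the orphaned records the AI described.

```python
# Hypothetical illustration of referential integrity via a foreign key,
# using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled

conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id)
)""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")

try:
    # Deleting the customer would orphan order 10, so the constraint blocks it.
    conn.execute("DELETE FROM customers WHERE id = 1")
except sqlite3.IntegrityError as e:
    print("Blocked by the foreign key:", e)

try:
    # An order pointing at a customer that doesn't exist is rejected as well.
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 999)")
except sqlite3.IntegrityError as e:
    print("Blocked by the foreign key:", e)
conn.close()
```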
(Gemini) Question 4: Can you explain the concept of normalization and why it's important?
Its used to help logically organize data for better use, maintenance, efficiency, etc
(Gemini) That's a great summary! Normalization is indeed a process of organizing data into tables that minimize redundancy and maximize data integrity.
Here are some key benefits of normalization:
• Reduced Data Redundancy: By eliminating duplicate data, normalization helps to save storage space and prevent inconsistencies.
• Improved Data Integrity: Normalization ensures that data is consistent and accurate across the database.
• Enhanced Data Security: By reducing redundancy, normalization can help to improve data security.
• Simplified Maintenance: Normalized databases are generally easier to maintain and update.
Would you like to delve deeper into the different levels of normalization?
Really quick, mind providing a couple points on the security please?
(Gemini) Absolutely! Here are a couple of points on how normalization can improve data security:
• Reduced Attack Surface: By minimizing redundancy, normalization reduces the amount of data that attackers can potentially exploit. Fewer data points mean fewer opportunities for unauthorized access or modification.
• Improved Data Isolation: Normalization helps to isolate sensitive data, making it more difficult for unauthorized users to access or manipulate. For example, if customer credit card information is stored in a separate table, it reduces the risk of exposure if the main customer table is compromised.
(Gemini) Would you like to continue with the next question?
Great- thank you! Yep next question please.
(Gemini) Question 6: What are some common challenges you might face when working with large databases?
consistency, efficiency, security, storage..
(Gemini) Great points! Those are indeed some of the common challenges associated with large databases. Let's expand on them:
• Data Consistency: Maintaining data consistency across a large database can be difficult, especially when dealing with concurrent updates and complex relationships.
• Performance: Large databases can often experience performance issues, such as slow query execution and high resource utilization.
• Security: Protecting large amounts of sensitive data from unauthorized access and breaches is a major challenge.
• Storage: Managing storage requirements for large databases can be costly and complex, especially as the data grows over time.
Would you like to discuss any of these challenges in more detail?
In short, what are common solutions to these?
(Gemini) Here are some common solutions to the challenges of large databases:
• Indexing: Creating appropriate indexes can significantly improve query performance and reduce the overall workload on the database.
• Database Partitioning: Dividing a large database into smaller, more manageable partitions can enhance scalability and performance.
• Data Warehousing: For analytical workloads, data warehousing can provide a centralized repository for storing and analyzing large datasets.
• Denormalization: In certain cases, denormalizing data (introducing redundancy) can improve performance by reducing the number of joins required for queries.
• Caching: Caching frequently accessed data in memory can greatly improve query performance.
• Regular Maintenance: Performing regular maintenance tasks, such as vacuuming, analyzing, and optimizing the database, can help to maintain performance and prevent issues.
• Security Measures: Implementing robust security measures, such as encryption, access controls, and regular backups, is essential to protect sensitive data.
Would you like to discuss any of these solutions in more detail?
Let's come back to this at the end-- i'm curious about "denormalization". next question though please
… (more interaction) …
Do you remember the points I asked you to talk about at the end?
(Gemini) You were interested in discussing denormalization and common solutions to challenges in large databases.
Denormalization is a database design technique that intentionally introduces redundancy to improve performance. While normalization generally aims to reduce redundancy, there are cases where denormalization can be beneficial, such as when:
• Performance is a critical factor: Denormalization can reduce the number of joins required for queries, leading to improved performance.
• Read-heavy workloads: If your database is primarily used for reading data and updates are infrequent, denormalization can be a viable option.
• Specific query patterns: Denormalization can be tailored to optimize specific query patterns that involve frequent joins or aggregations.
However, it's important to note that denormalization can introduce risks such as increased data redundancy and potential inconsistencies. Therefore, it should be used judiciously and with careful consideration of the trade-offs involved.
Regarding common solutions to challenges in large databases, we discussed:
• Indexing: Creating appropriate indexes can significantly improve query performance and reduce the overall workload on the database.
• Database Partitioning: Dividing a large database into smaller, more manageable partitions can enhance scalability and performance.
• Data Warehousing: For analytical workloads, data warehousing can provide a centralized repository for storing and analyzing large datasets.
• Caching: Caching frequently accessed data in memory can greatly improve query performance.
• Regular Maintenance: Performing regular maintenance tasks, such as vacuuming, analyzing, and optimizing the database, can help to maintain performance and prevent issues.
• Security Measures: Implementing robust security measures, such as encryption, access controls, and regular backups, is essential to protect sensitive data.
Would you like to delve deeper into any of these topics?
Can you give me a real-world example of when 'denormalization' would be used?
(Gemini) Absolutely! A common real-world example of denormalization is in online retail applications.
Imagine an e-commerce website that needs to display product information, including its name, price, description, and category, on a product page. To avoid multiple database queries to retrieve this information, the website might denormalize the data by storing the product category name directly in the product table, instead of referencing it from a separate categories table.
Here's why this might be beneficial:
• Improved performance: By storing the category name directly in the product table, the website can retrieve all necessary information for a product in a single query, reducing the overall response time.
• Reduced database load: Fewer database queries mean less load on the database server, which can improve scalability and performance.
However, there are potential drawbacks to this approach:
• Increased redundancy: If the category name changes, it needs to be updated in all product records associated with that category, potentially leading to inconsistencies if not handled carefully.
• Reduced data integrity: Denormalization can make it more difficult to maintain data integrity, as changes in one table might not be reflected in related tables.
In this scenario, the decision to denormalize would depend on factors such as the expected frequency of category changes, the performance requirements of the application, and the trade-offs between improved performance and potential risks.
Awesome- thank you so much! have a great rest of your day! :)
(Gemini) You're welcome! I'm glad I could help. Have a great day yourself!
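To round out the denormalization example Gemini gave, here is a short sketch of the two designs side by side, once more in sqlite3 with hypothetical product and category tables; this is my own illustration rather than anything generated in a transcript.

```python
# Hypothetical illustration of normalized vs. denormalized product data,
# using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")

# Normalized: the category name lives in one place; the product page needs a join.
conn.execute("CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE products (
    id INTEGER PRIMARY KEY, name TEXT, price REAL, category_id INTEGER
)""")
conn.execute("INSERT INTO categories VALUES (1, 'Resort Wear')")
conn.execute("INSERT INTO products VALUES (100, 'Linen Shirt', 59.0, 1)")
print(conn.execute("""
    SELECT p.name, p.price, c.name
    FROM products p JOIN categories c ON c.id = p.category_id
    WHERE p.id = 100
""").fetchone())

# Denormalized: the category name is copied into the product row, so a single
# table lookup serves the page, but renaming a category now means updating
# every product row that carries the old name.
conn.execute("""CREATE TABLE products_denorm (
    id INTEGER PRIMARY KEY, name TEXT, price REAL, category_name TEXT
)""")
conn.execute(
    "INSERT INTO products_denorm VALUES (100, 'Linen Shirt', 59.0, 'Resort Wear')"
)
print(conn.execute(
    "SELECT name, price, category_name FROM products_denorm WHERE id = 100"
).fetchone())
conn.close()
```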
Integrating an AI tutor into my courses was successful as both an entry ticket and a motivational tool. Student feedback varied from neutral to highly enthusiastic. Response rates were near 100%, even though it was a low-stakes (and in one course, extra credit) assignment from a points perspective. Students felt that these tasks helped them process the course material and reflect on their thinking. They also felt more confident in their knowledge as a result.
The AI's ability to tailor its responses based on the depth of student input was particularly useful. It prompted students who gave vague answers to elaborate further. For those who provided detailed responses, the AI introduced next steps, enhancing their learning progression beyond the intended level. The AI's adaptive responses met each student at their current understanding level and helped them advance.
The AI also highlighted aspects of the content where students struggled or lacked detailed understanding. This insight allowed me to address trending gaps or miscommunications more effectively, improving reflective teaching strategies.
This methodology will likely translate with relative ease into customized AI tutoring solutions. While SUNY is still in the formative phase of defining and developing an AI tutor for use in coursework, that tutor is likely to incorporate reflective, low-stakes questions based on recorded course content. Open-ended activities tied to course content are the most likely to integrate easily into any AI tutor design.