The Mensa App is live and fully functional! After a six-week sprint, this lightweight networking platform is now in the hands of a small group of Mensa members across several chapters, who are giving it its first real-world beta test. It combines a responsive front-end, secure backend authentication, and automated email notifications — all built for rapid deployment and real-world usability.
Key technical features:
- Full-stack architecture: React + TypeScript front end, Python/Django back end with REST API endpoints.
- Secure authentication: JWT-based token management with refresh flow, email-as-username login, and custom user model.
- User data storage: Profile photos and user metadata stored directly in PostgreSQL for simplicity during early development.
- Email integration: Reliable Mailgun API for notifications, with SMTP fallback.
- Scalable foundations: Code structured for future enhancements like Redux-based state management or S3 file storage.
Sections covered in this post:
- Timeline: Planning, Development, and Release
- TypeScript: Learning and Integration
- Backend Authentication Endpoints
- Frontend Token Storage and Duplication Lessons
- File Storage Decisions & Email-as-Username Design
- Email Support Challenges and Solutions
- Collaboration Lessons and Takeaways
- Release and Next Steps
Links:
- API Server (Django REST): https://github.com/bbornino/mensa_member_connect_backend
- Web Client (React + TypeScript): https://github.com/bbornino/mensa_member_connect_frontend
- Live App: https://www.namme.us
- Project Overview: Mensa Member Connect
The Timeline
- September: Writing the app specification, refining requirements, and getting the initial environment in place.
- October–mid-November: Full-stack development — front end, back end, and integration.
- Late November: Final testing, deployment, and release.
Over those six weeks of development, the project gradually took shape. I handled the bulk of the back-end work (API endpoints, file storage logic, and email integration) and established the main front-end architecture and integration points, while my collaborator contributed the server environment and front-end GUI components.
TypeScript: My New Learning Curve
The only new technology for me on this project was TypeScript. Setting up Vite itself was quick — about 20 minutes — but the real challenge was figuring out how to structure TypeScript properly and integrate it with React in a maintainable way. To speed things up, I leaned on ChatGPT for generating boilerplate and functional components. Most of the time, the suggestions made sense immediately; I could read, understand, and adapt the code quickly without heavy debugging.
Beyond just copying snippets, I used these examples as a guide to understand best practices for type safety, component props, and API integration. TypeScript forced me to think a bit more about data structures and error handling upfront, which actually made my front-end code more predictable and easier to maintain. Other parts of the stack — Python/Django, PostgreSQL, REST endpoints — were already familiar territory, so this allowed me to focus on applying TypeScript in a real project context while still building a functional, maintainable front end.
Backend Authentication Endpoints
Initially, I relied on AI suggestions to generate endpoints like:
/api/user/login
/api/user/logout
/api/user/token-refresh
/api/user/password-reset
At first glance, this seems fine. But custom_user_views.py ballooned to over 600 lines with more than 20 imports, many of them used by only a single function. Maintaining the file became cumbersome, and extending or debugging it quickly turned into a headache.
The real challenge came when I tried to split the file across multiple modules while keeping the existing endpoints intact — the goal was zero impact on the front end. Adding to the complexity, all user CRUD endpoints required authentication, but the four authentication endpoints had mixed requirements: some needed tokens, others didn’t. Coordinating this while refactoring the code safely required careful attention to detail, repeated testing, and a lot of trial and error.
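In Django REST Framework, that kind of mixed requirement is usually expressed per view with permission classes rather than buried in one large module. A minimal sketch of the pattern (the view bodies and names are hypothetical, not the project's actual code):

from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import AllowAny, IsAuthenticated
from rest_framework.response import Response

@api_view(["POST"])
@permission_classes([AllowAny])  # Login must work without a token
def login_view(request):
    # Validate credentials and issue JWT tokens here
    return Response({"detail": "ok"})

@api_view(["POST"])
@permission_classes([IsAuthenticated])  # Logout requires a valid access token
def logout_view(request):
    # Invalidate the refresh token here
    return Response({"detail": "logged out"})

Declaring the rule on each view makes the token-or-no-token decision visible at a glance, which is exactly what got lost in the single 600-line file.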
In hindsight, a clean separation of endpoints would have been better:
/api/auth/login
/api/auth/logout
/api/auth/token-refresh
/api/auth/user-password-reset
This approach isolates authentication logic (login, logout, token refresh) from general user CRUD operations, making the backend simpler, more maintainable, and easier to extend — and would have minimized the headaches of token handling and conditional authentication rules.
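One way to realize that separation in Django is a dedicated auth URLconf. A sketch of what the routing might look like, assuming djangorestframework-simplejwt for the token endpoints (the project's actual wiring may differ):

# urls.py: hypothetical auth routing sketch
from django.urls import path
from rest_framework_simplejwt.views import TokenObtainPairView, TokenRefreshView

from .views import logout_view, password_reset_view  # hypothetical custom views

urlpatterns = [
    path("api/auth/login", TokenObtainPairView.as_view()),
    path("api/auth/logout", logout_view),
    path("api/auth/token-refresh", TokenRefreshView.as_view()),
    path("api/auth/user-password-reset", password_reset_view),
]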
Frontend Token Storage
Another challenge was how tokens were stored and refreshed on the front end. I implemented the token refresh logic inside my customFetch utility, exposed to components through a useApiRequest hook, while my collaborator independently wrote similar logic in the AuthProvider.
// useApiRequest.ts
import { useNavigate } from "react-router-dom";
import { useState, useCallback } from "react";
import { customFetch } from "./customFetch";

export function useApiRequest<T = any>() {
  const navigate = useNavigate();
  const [error, setError] = useState<string | null>(null);

  const apiRequest = useCallback(
    async (url: string, options: RequestInit = {}): Promise<T | null> => {
      const accessToken = localStorage.getItem("access_token");
      const refreshToken = localStorage.getItem("refresh_token");
      try {
        const response: T = await customFetch(
          url,
          options,
          accessToken,
          refreshToken,
          navigate
        );
        return response;
      } catch (err: any) {
        setError(err?.message ?? "Unknown error");
        console.error(err);
        // Re-throw the error so it can be caught and handled by the caller
        throw err;
      }
    },
    [navigate]
  );

  return { apiRequest, error };
}
Meanwhile, my collaborator's parallel implementation lived in the AuthProvider:
// Example: AuthProvider token refresh & axios interceptors
// (isRefreshing, refreshPromise, apiClient, and the refs are defined elsewhere in the provider)
const refreshAccessTokenInternal = useCallback(async (): Promise<string | null> => {
  if (isRefreshing.current && refreshPromise.current) return refreshPromise.current;

  const refresh = localStorage.getItem("refresh_token");
  if (!refresh) return null;

  isRefreshing.current = true;
  refreshPromise.current = (async () => {
    try {
      const response = await axios.post(TOKEN_REFRESH_API_URL, { refresh });
      const { access } = response.data;
      localStorage.setItem("access_token", access);
      setAccessToken(access);
      return access;
    } catch (err) {
      clearAuthStateRef.current?.();
      throw err;
    } finally {
      isRefreshing.current = false;
      refreshPromise.current = null;
    }
  })();
  return refreshPromise.current;
}, [clearAuthState]);

// Axios interceptors to retry failed requests with refreshed token
useEffect(() => {
  const requestInterceptor = apiClient.interceptors.request.use(config => {
    const token = localStorage.getItem("access_token");
    if (token) config.headers.Authorization = `Bearer ${token}`;
    return config;
  });

  const responseInterceptor = apiClient.interceptors.response.use(
    res => res,
    async error => {
      const originalRequest = error.config;
      if (error.response?.status === 401 && !originalRequest._retry) {
        originalRequest._retry = true;
        const newToken = await refreshAccessTokenInternalRef.current?.();
        if (newToken) {
          originalRequest.headers.Authorization = `Bearer ${newToken}`;
          return apiClient(originalRequest);
        }
      }
      return Promise.reject(error);
    }
  );

  return () => {
    apiClient.interceptors.request.eject(requestInterceptor);
    apiClient.interceptors.response.eject(responseInterceptor);
  };
}, []);
Both approaches worked, but having parallel implementations created some redundancy and occasional confusion. In hindsight, a centralized solution like Redux would have made it easier to manage token state consistently across the app. At the same time, this experience reinforced a simple truth: giving someone a working tool doesn’t always mean it will be used — and that’s okay. The important thing is that the app functioned reliably in the end, and the token refresh logic was solid and predictable.
File Storage and Email as the Username
Another key set of design decisions involved user data storage and authentication identifiers. On the file storage side, we debated several approaches for storing user-uploaded content, like profile photos:
- Server filesystem – simple, but would have been a headache for local development and backups.
- Dedicated file service (AWS S3, etc.) – scalable, but added extra complexity for this first release.
- Database storage – storing files directly as binary data in PostgreSQL.
Given the limited scope and zero budget, we opted for database storage for now. It’s not ideal for production-scale apps, but it allowed us to develop, test, and release quickly. If the app sees wide adoption, the first new feature will be migrating to S3 or another file service.
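Concretely, database storage just means reading the upload's raw bytes into a model field. A minimal sketch of what a profile-photo upload view might look like under this approach (the view name is hypothetical, not the project's actual endpoint):

from django.http import JsonResponse
from rest_framework.decorators import api_view, permission_classes
from rest_framework.permissions import IsAuthenticated

@api_view(["POST"])
@permission_classes([IsAuthenticated])
def upload_profile_photo(request):
    # Read the uploaded file's raw bytes and store them in the BinaryField
    photo = request.FILES["photo"]
    request.user.profile_photo = photo.read()
    request.user.save(update_fields=["profile_photo"])
    return JsonResponse({"status": "ok"})

Serving the photo back out typically means base64-encoding the bytes into an API response, which is one reason this approach stops scaling as images and users multiply.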
On the authentication side, we made another pivotal choice: using email as the primary login identifier. Out of the box, Django expects a username for authentication. But requiring a separate username felt unnecessary — users already have a unique email. By customizing the user model, we could simplify login flows, reduce friction, and keep our API cleaner.
Here’s how the CustomUser model reflects these decisions:
class CustomUser(AbstractUser):
    username = None                          # Remove username field entirely
    email = models.EmailField(unique=True)   # Make email the main identifier
    USERNAME_FIELD = "email"                 # Authenticate with email instead of username
    REQUIRED_FIELDS = []                     # Prevents Django from prompting for a username
    objects = CustomUserManager()

    member_id = models.IntegerField(null=True, blank=True)
    city = models.CharField(max_length=48, blank=True, null=True)
    state = models.CharField(max_length=24, blank=True, null=True)
    phone = PhoneNumberField(blank=True, null=True)
    role = models.CharField(max_length=16, default="member")
    status = models.CharField(max_length=24, default="pending")
    occupation = models.CharField(max_length=128, default="", blank=True)
    industry = models.ForeignKey(
        Industry,
        on_delete=models.CASCADE,
        null=True,
        blank=True,
        related_name="user_experts",
    )
    background = models.TextField(default="", blank=True, null=True)
    profile_photo = models.BinaryField(null=True, blank=True)
    availability_status = models.CharField(max_length=32, default="")
    show_contact_info = models.BooleanField(default=False)
    local_group = models.ForeignKey(
        LocalGroup,
        on_delete=models.CASCADE,
        null=True,
        blank=True,
    )
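The model assigns objects = CustomUserManager(), which isn't shown above. A manager for an email-only user model typically looks something like this (a sketch of the standard Django pattern, not necessarily the project's exact code):

from django.contrib.auth.models import BaseUserManager

class CustomUserManager(BaseUserManager):
    def create_user(self, email, password=None, **extra_fields):
        # Email replaces username, so it must always be present
        if not email:
            raise ValueError("An email address is required")
        user = self.model(email=self.normalize_email(email), **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, password=None, **extra_fields):
        extra_fields.setdefault("is_staff", True)
        extra_fields.setdefault("is_superuser", True)
        return self.create_user(email, password, **extra_fields)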
This setup lets users log in seamlessly with email, while the database stores profile photos and other optional fields safely. Combined with our token refresh and API auth setup, this made the backend straightforward to implement — once I untangled the AI-suggested endpoints that had started off as a mess!
Email Support Challenges
Integrating email support proved more complex than I anticipated. Initially, I spent hours troubleshooting, convinced that my code was failing. In reality, the issue was DNS propagation delays — a reminder that not all bugs are in your code! To confirm this, I stripped everything down to the simplest possible email call using Django’s built-in send_mail function:
from django.core.mail import send_mail

send_mail(
    'Welcome to the Mensa App!',
    'Your account has been successfully created.',
    'no-reply@mensaapp.org',
    [user.email],
)
Even with this minimal setup, emails failed to send, which confirmed that the server and DNS were not yet ready; the problem was never our code or configuration.
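A faster way to confirm that theory is to query the DNS records directly instead of poking at the mail stack. A quick diagnostic sketch using the dnspython package (an assumption on my part; any DNS lookup tool works just as well):

import dns.resolver  # pip install dnspython

# Check whether the mail-related records have propagated yet
domain = "mensaapp.org"
for record_type in ("MX", "TXT"):
    try:
        for rdata in dns.resolver.resolve(domain, record_type):
            print(record_type, rdata.to_text())
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"{record_type}: records not visible yet, likely still propagating")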
For the production release, we chose Mailgun as our primary email service. Its HTTP API proved far more reliable than SMTP in cloud environments, especially for automated notifications like user approvals. Here’s a simplified example of the final approach we implemented:
import logging
import os

import requests
from django.conf import settings

logger = logging.getLogger(__name__)


def send_email_via_mailgun_api(
    to_email: str,
    subject: str,
    text_content: str,
    html_content: str = None,
    from_email: str = None,
    reply_to: str = None,
) -> bool:
    """
    Send email using Mailgun HTTP API.
    Returns True if successful, False otherwise.
    """
    mailgun_api_key = os.environ.get("MAILGUN_API_KEY")
    mailgun_domain = os.environ.get("MAILGUN_DOMAIN")
    if not mailgun_api_key or not mailgun_domain:
        logger.warning("Mailgun credentials not configured.")
        return False

    api_url = f"https://api.mailgun.net/v3/{mailgun_domain}/messages"
    from_address = from_email or settings.DEFAULT_FROM_EMAIL

    data = {
        "from": from_address,
        "to": to_email,
        "subject": subject,
        "text": text_content,
    }
    if html_content:
        data["html"] = html_content
    if reply_to:
        data["h:Reply-To"] = reply_to

    try:
        response = requests.post(api_url, auth=("api", mailgun_api_key), data=data, timeout=10)
        return response.status_code == 200
    except requests.exceptions.RequestException as e:
        logger.error("Mailgun request failed: %s", e)
        return False
This function handles all the core email needs for the app. We also created specialized wrappers for specific notifications, like alerting admins when a new user registers:
def notify_admin_new_registration(user_email, user_name, first_name=None, last_name=None):
    """
    Notify admin that a new user registered and is awaiting approval.
    """
    subject = f"New user registration: {user_name}"
    text_content = f"{user_name} ({user_email}) has registered."
    html_content = f"<p>{user_name} ({user_email}) has registered.</p>"

    # Attempt Mailgun API first, fallback to SMTP
    if send_email_via_mailgun_api(settings.ADMIN_EMAIL, subject, text_content, html_content):
        return

    logger.warning("Mailgun API failed, falling back to SMTP")
    # ...fallback code omitted for brevity...
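The fallback itself is omitted above; a minimal version built on Django's send_mail might look like this (a sketch, not the project's actual fallback code):

import logging

from django.conf import settings
from django.core.mail import send_mail

logger = logging.getLogger(__name__)

def send_email_via_smtp(to_email, subject, text_content, html_content=None):
    """Fallback path: deliver through Django's configured SMTP backend."""
    try:
        send_mail(
            subject,
            text_content,
            settings.DEFAULT_FROM_EMAIL,
            [to_email],
            html_message=html_content,
        )
        return True
    except Exception as e:
        logger.error("SMTP fallback failed: %s", e)
        return False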
And a real-world usage example, notifying users upon account approval:
if old_status != "active" and target_user.status == "active":
    try:
        notify_user_approval(
            target_user.email,
            target_user.get_full_name(),
            first_name=target_user.first_name,
            last_name=target_user.last_name,
        )
        logger.info("Sent approval notification to user: %s", target_user.email)
    except Exception as e:
        logger.error("Failed to send approval notification to user %s: %s", target_user.email, e)
Key takeaways:
- Always check for infrastructure delays — sometimes DNS or external services are the real culprit.
- Using a dedicated email API (like Mailgun) is often more reliable than SMTP in cloud environments.
- Implementing a fallback mechanism ensures critical notifications still get delivered if the primary service fails.
This approach allowed us to finally release user-facing notifications with confidence, without the hours of uncertainty that plagued the initial setup.
Collaboration Lessons
Working on this project reminded me of my old teams at work, where daily scrums and quick check-ins kept everyone aligned and small issues from snowballing. Here, there wasn’t that kind of structure, so I had to be extra diligent — keeping notes, tracking my own progress, and thinking a few steps ahead to make sure my front-end and integration work didn’t hit unexpected snags.
Next time I collaborate on a project, I’ll make sure to schedule regular tech syncs, clearly outline who’s responsible for each piece, and maintain a shared record of decisions. Even with a small team, those habits make a huge difference in preventing miscommunication and avoiding friction that can slow everything down.
Release and Next Steps
The Mensa App is now live and fully functional: https://www.namme.us.
This project reinforced that even a small, unpaid project can deliver massive learning and a sense of accomplishment. Six weeks of focused development, troubleshooting, and integration produced a real-world, usable application that connects a community and demonstrates the power of hands-on learning.
Key Takeaways:
- AI and other tools can accelerate development, but double-check critical details like API endpoints and best practices.
- Short, focused sprints can produce meaningful results — planning doesn’t need to be months.
- Taking on one new technology at a time, like TypeScript here, is enough to challenge you without overwhelming your core skills.
- Collaboration works best when responsibilities are clearly defined — and different approaches are allowed.
- Release-ready MVPs don’t require perfection; they require focus, persistence, and a plan for scaling.