Vol.3 No.2, October 2004
Editorial & In This Issue (pp. 075-076)
M. Gaedke
Research Articles and Reviews:
e-Prototyping: Iterative Analysis of Web User Requirements
(pp. 077-094)
W.-G. Bleek, M. Jeenicke and R. Klischewski
Projects developing Web applications face problems in identifying the
Web users' requirements, for several reasons. It is unclear how to
gather initial requirements from potential users if there is no design
artifact to communicate about.
Developers have difficulty identifying the needs of the Web
application users during the ongoing development process because of a
lack of proper communication concepts. Development teams for Web-based
systems include professionals from different disciplines with diverse
cultures. Members of the development team often belong to many different
organizations with varying stakes in the project. This article presents
a modified prototyping approach called e-prototyping. This approach includes
frequent releases of software versions (based on short development
cycles) as well as integrated mechanisms for gathering feedback from
users and other relevant actors via the live system. It underlines the
need to offer various communication channels to the users and to
systematically order the different streams of feedback to enable the
developers to identify the user requirements. e-prototyping encompasses the
management of an agile software development process and the systematic
evaluation of manifold feedback contributions.
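The abstract does not spell out how the manifold feedback streams are ordered. As a minimal illustration only (the channel names, release numbers and grouping strategy below are assumptions, not the authors' scheme), feedback gathered over several short release cycles might be bucketed by software version and communication channel:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Feedback:
    channel: str   # hypothetical channel label, e.g. "email" or "web-form"
    release: int   # software version the feedback refers to
    text: str

def order_feedback(items):
    """Group feedback by release, then by channel, so that all
    contributions concerning one software version can be reviewed
    together before the next iteration."""
    grouped = defaultdict(lambda: defaultdict(list))
    for f in items:
        grouped[f.release][f.channel].append(f.text)
    return {r: dict(by_channel) for r, by_channel in sorted(grouped.items())}

items = [
    Feedback("web-form", 2, "Search button hard to find"),
    Feedback("email", 1, "Login fails on Safari"),
    Feedback("web-form", 1, "Add a help page"),
]
print(order_feedback(items))
```

Grouping by release first mirrors the short development cycles: each iteration's feedback can be evaluated as a unit before the next version ships.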
Quantification of Authentication Mechanisms: a Usability Perspective
(pp. 095-123)
K. Renaud
Users wishing to use secure computer systems or web sites must
authenticate themselves, usually by supplying a user identification and
an authenticator to prove that they are indeed the person they claim to
be. The authenticator of choice in the web environment is the simple
password. Since the advent of the web, the proliferation of secure
systems has placed an unacceptable burden on
users to recall increasing numbers of passwords that are often
infrequently used. This paper will review the research into different
types of authentication mechanisms, including simple passwords, and
propose a mechanism for quantifying the quality of different
authentication mechanisms to support an informed choice for web site
administrators.
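As one hedged illustration of what quantifying an authentication mechanism could involve (this naive entropy bound is a standard textbook measure, not the quantification scheme the paper proposes), a password's resistance to guessing can be scored from its length and the alphabet it appears to draw from:

```python
import math
import string

def charset_size(password):
    """Estimate the size of the alphabet the password draws from."""
    size = 0
    if any(c in string.ascii_lowercase for c in password):
        size += 26
    if any(c in string.ascii_uppercase for c in password):
        size += 26
    if any(c in string.digits for c in password):
        size += 10
    if any(c in string.punctuation for c in password):
        size += len(string.punctuation)  # 32 ASCII punctuation characters
    return size

def guessing_entropy_bits(password):
    """Naive upper bound on guessing entropy: log2(alphabet ** length)."""
    return len(password) * math.log2(charset_size(password))

for pw in ["secret", "S3cr3t!"]:
    print(pw, round(guessing_entropy_bits(pw), 1))
```

Such a score captures only one dimension; the paper's point is that usability (e.g. memorability of infrequently used passwords) must enter the quantification as well.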
Model-Driven Web Usage Analysis for the Evaluation of Web Application Quality
(pp. 124-152)
P. Fraternali, P.L. Lanzi, M. Matera and A. Maurino
So far, conceptual modeling of Web applications has been used primarily
in the upper part of the life cycle, as a driver for system analysis.
Little attention has been paid to exploiting the conceptual
specifications developed during analysis for application evaluation,
maintenance and evolution. This paper illustrates an approach for
integrating the use of conceptual models in the lower part of the
application life cycle. The approach is based on the adoption of
conceptual logs, which are Web usage logs enriched with meta-data
deriving from the application conceptual specifications. In particular,
the paper illustrates how conceptual logs are generated and exploited in
Web usage evaluation and mining, so as to achieve a deeper and
systematic quality evaluation of Web applications. A prototype tool
supporting the generation of conceptual logs and the evaluation
activities is also presented.
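A minimal sketch of the enrichment idea, with an invented URL-to-concept mapping (the paper derives this meta-data from the application's conceptual specification; the page, unit and entity names below are hypothetical):

```python
# Hypothetical mapping from URL paths to conceptual units. The names
# are invented for illustration; in the paper the meta-data comes from
# the application's conceptual model, not a hand-written table.
CONCEPT_MAP = {
    "/product": {"page": "ProductPage", "unit": "ProductDetails", "entity": "Product"},
    "/cart":    {"page": "CartPage",    "unit": "CartIndex",      "entity": "OrderItem"},
}

def enrich(raw_entry):
    """Attach conceptual meta-data to a parsed Web usage log entry,
    producing one record of a 'conceptual log'."""
    meta = CONCEPT_MAP.get(raw_entry["path"], {})
    return {**raw_entry, **meta}

entry = {"ip": "10.0.0.1", "path": "/product", "ts": "2004-10-01T12:00:00"}
print(enrich(entry))
```

Once requests are tagged with the conceptual units they touched, usage mining can report on model-level elements (pages, units, entities) rather than raw URLs.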
On the Image Content of a Web Segment: Chile as a Case Study
(pp. 153-168)
A. Jaimes, J. Ruiz-del-Solar, R. Verschae, R. Baeza-Yates, C. Castillo, D. Yaksic and E. Davis
We
propose a methodology to characterize the image contents of a web
segment, and we present an analysis of the contents of a segment of the
Chilean web (.CL domain). Our framework uses an efficient web-crawling
architecture, standard content-based analysis tools (to extract
low-level features such as color, shape and texture), and novel skin and
face detection algorithms. In an automated process we start by examining
all websites within a domain (e.g., .cl websites), obtaining links to
images, and downloading a large number of the images (in all of our
experiments approx. 383,000 images that correspond to about 35 billion
pixels). Once the images are downloaded to a local server, our process
automatically extracts several low-level visual features (color,
texture, shape, etc.). Using novel algorithms we perform skin and face
detection. The results of visual feature extraction, skin, and face
detection are then used to characterize the contents of a web segment.
We tested our methodology on a segment of the Chilean web (.cl), by
automatically downloading and processing 183,000 images in 2003 and
200,000 images in 2004. We present some statistics derived from this
last set of 200,000 images, which should be of use to anyone concerned
with the image content of the web in Chile. Our study is the first to
use content-based tools to determine the image contents of a given
web segment.
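To illustrate the kind of content-based analysis involved, here is a classic rule-based skin classifier for RGB pixels; it is a common baseline from the literature, not the novel skin and face detectors the authors describe:

```python
def is_skin_rgb(r, g, b):
    """Classic rule-based skin test for an RGB pixel (a common
    baseline heuristic; not the paper's novel detector)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_fraction(pixels):
    """Fraction of skin-coloured pixels: one coarse feature for
    characterizing the content of an image."""
    hits = sum(1 for p in pixels if is_skin_rgb(*p))
    return hits / len(pixels)

# Four sample pixels: two skin-toned, one dark grey, one bright green.
sample = [(220, 170, 140), (30, 30, 30), (200, 120, 90), (10, 200, 10)]
print(skin_fraction(sample))  # → 0.5
```

Aggregating such per-image features over hundreds of thousands of downloaded images is what allows the contents of a whole web segment to be summarized statistically.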