
I am trying to build an application in the Eclipse IDE for my resume (my first), and I have run into a problem in my main file where I import edu.stanford.nlp.pipeline.*. I have been playing around with the Gradle dependencies to see if that was the issue, since the JAR file names were different from the lines the Stanford website listed to copy and paste into the dependencies block.
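For reference (this is an assumption on my part, paraphrased from memory rather than copied from the site), my understanding is that the official instructions boil down to the main CoreNLP jar plus a models jar pulled in via a classifier, roughly:

```
dependencies {
    // Main CoreNLP code jar
    implementation 'edu.stanford.nlp:stanford-corenlp:4.5.4'
    // Model files, published as a classifier jar of the same artifact
    implementation 'edu.stanford.nlp:stanford-corenlp:4.5.4:models'
}
```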

Here is the Gradle build:

```
plugins {
    // Apply the java-library plugin for API and implementation separation.
    id 'java-library'
}

repositories {
    // Use Maven Central for resolving dependencies.
    mavenCentral()
}

dependencies {
    // Use JUnit Jupiter for testing.
    testImplementation 'org.junit.jupiter:junit-jupiter:5.8.1'

    // This dependency is exported to consumers, that is to say found on their compile classpath.
    api 'org.apache.commons:commons-math3:3.6.1'

    // This dependency is used internally, and not exposed to consumers on their own compile classpath.
    implementation 'com.google.guava:guava:30.1.1-jre'

    // Import Jsoup
    implementation 'org.jsoup:jsoup:1.16.1'

    // Stanford CoreNLP library and model jars
    implementation 'edu.stanford.nlp:stanford-corenlp:4.5.4'
    implementation 'edu.stanford.nlp:CoreNLP-main:4.5.4'
    implementation 'edu.stanford.nlp:stanford-english-kbp-corenlp-models-current:4.5.4'
    implementation 'edu.stanford.nlp:stanford-corenlp:4.5.4:models'
    implementation 'edu.stanford.nlp:stanford-corenlp:4.5.4:models-english'
    implementation 'edu.stanford.nlp:stanford-corenlp:4.5.4:models-english-kbp'
}

tasks.named('test') {
    // Use JUnit Platform for unit tests.
    useJUnitPlatform()
}

```

Here is the main application:

```
package Stanford.Round;

import edu.stanford.nlp.pipeline.*;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

import java.util.Properties;
import java.util.Scanner;

public class Main {
    public static void main(String[] args) throws Exception {
        // Set up scanner for console input
        Scanner scanner = new Scanner(System.in);
        System.out.println("Enter a keyword:");
        String keyword = scanner.nextLine();

        // Scrape the Stanford Encyclopedia of Philosophy
        Document doc = Jsoup.connect("https://plato.stanford.edu/search/searcher.py?query=" + keyword).get();
        String text = doc.text();

        // Set up Stanford CoreNLP pipeline
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, parse, sentiment");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // Annotate the scraped text
        CoreDocument document = new CoreDocument(text);
        pipeline.annotate(document);

        // Extract and print key ideas (this is a simplification; actual implementation would be more complex)
        for (CoreSentence sentence : document.sentences()) {
            String sentiment = sentence.sentiment();
            System.out.println(sentence + " (Sentiment: " + sentiment + ")");
        }
    }
}

```

Any and all help is GREATLY appreciated.

The application is throwing the following errors:

```
Exception in thread "main" java.lang.Error: Unresolved compilation problems:
    StanfordCoreNLP cannot be resolved to a type
    StanfordCoreNLP cannot be resolved to a type
    CoreDocument cannot be resolved to a type
    CoreDocument cannot be resolved to a type
    CoreSentence cannot be resolved to a type

    at Stanford.Round.Main.main(Main.java:24)
```

I downloaded the CoreNLP-main folder from GitHub and put it into the build path via the class folder option, and I have also put the english-kbp JAR on the classpath. Jsoup is already there and is not having any of the same issues.
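In case it helps clarify what I mean by the manual setup, my understanding is that the Gradle equivalent of those hand-added classpath entries would be a plain file dependency, something like the sketch below (the paths are just placeholders, not my real layout):

```
dependencies {
    // Hypothetical local-jar setup mirroring the manual Eclipse classpath entries;
    // adjust the paths to wherever the downloaded JARs actually live.
    implementation files(
        'libs/stanford-corenlp-4.5.4.jar',
        'libs/stanford-corenlp-4.5.4-models.jar',
        'libs/stanford-english-kbp-corenlp-models-current.jar'
    )
}
```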
