
Hi Folks,

I am trying to get a Splunk response from Java using the method below:

public String executeSearch(String searchQuery) throws IOException {
    //String apiUrl = hostName + "/__raw/services/search/jobs/export?search=" + URLEncoder.encode(searchQuery, "UTF-8").replace("+", "%20");
    String apiUrl = hostName + "/__raw/services/search/jobs/export?search="
            + URLEncoder.encode(searchQuery, "UTF-8")
                    .replace("+", "%2B")
                    .replace("%3D", "=")
                    .replace("%20", "+")
                    .replace("%2A", "*")
                    .replace("%3F", "?")
                    .replace("%40", "@")
                    .replace("%2C", ",");
    URL url = new URL(apiUrl);
    System.out.println("Value of Splunk URL is " + url);
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    connection.setRequestMethod("GET");
    String credentials = userName + ":" + password;
    String encodedCredentials = Base64.getEncoder().encodeToString(credentials.getBytes());
    connection.setRequestProperty("Authorization", "Basic " + encodedCredentials);
    StringBuilder response = new StringBuilder();
    try (BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
            System.out.println("Response Line: " + inputLine); // Print each line of the response
            response.append(inputLine);
        }
    }
    return response.toString();
}

public static void main(String[] args) {
    if (args.length < 10) {
        System.out.println("Insufficient arguments provided. Please provide all required arguments.");
        System.exit(1); // Exit with error code 1
    }
    String hostName = args[0];
    String userName = args[1];
    String password = args[2];
    String query = args[3];
    String logFileLocation = args[4];
    String fileName = args[5];
    String fileType = args[6];
    String startDate = args[7];
    String endDate = args[8];
    String time = args[9];
    try {
        SplunkRestClient client = new SplunkRestClient(hostName, userName, password);
        String searchResult = client.executeSearch(query);
        System.out.println(searchResult);
        // Write search result to file
        String filePath = logFileLocation + File.separator + fileName + "." + fileType;
        Files.write(Paths.get(filePath), searchResult.getBytes());
        // Check if file is empty
        File file = new File(filePath);
        if (file.length() == 0) {
            System.out.println("File is empty. Deleting...");
            if (file.delete()) {
                System.out.println("File deleted successfully.");
            } else {
                System.out.println("Failed to delete file.");
            }
        } else {
            // Validate file contents (assuming JSON data)
            try {
                new JSONObject(new String(Files.readAllBytes(Paths.get(filePath))));
                System.out.println("File contents are valid JSON.");
            } catch (Exception e) {
                System.out.println("File is corrupt. Deleting...");
                /*
                if (file.delete()) {
                    System.out.println("Corrupt file deleted successfully.");
                } else {
                    System.out.println("Failed to delete corrupt file.");
                }
                */
            }
        }
    } catch (IOException e) {
        System.out.println("Error occurred while executing search: " + e.getMessage());
        System.exit(2); // Exit with error code 2
    }
}

I am calling this Java class from a bat file:

:: All Splunk host names
set host_nam=https://log01.oss.mykronos.com/en-US/app/search/search?earliest=@d&latest=now
set host_cfn=https://cfn-log01.oss.mykronos.com/en-US/app/search/search?earliest=@d&latest=now
set host_dcust=https://koss01-log01.oss.mykronos.com/en-US/app/search/search?earliest=@d&latest=now
:: Splunk user name
set username=********
:: Splunk user password
set password=********
:: Splunk search queries for CAN, AUS, EUR
set query_kpi=index=*kpi* level=ERROR logger=KPI*
set query_wfm=index=*wfm* level=ERROR logger=KPI*
set file_type="JSON"
set start_date=""
set end_Date=""
set time="3600"

%JAVA_PATH% com.kronos.hca.daily.monitoring.processor.SplunkRestClient %host_nam% %username% %password% "%query_nam_kpi%" "%logFileLocation%" "%file_name_nam_kpi%" %file_type% %start_date% %end_Date% %time%

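One thing worth checking (an assumption on my part, since no error output is shown above): URLEncoder already encodes spaces as '+', so the first .replace("+", "%2B") turns every space in the query into a literal plus sign before the later replacements run. A minimal sketch that leaves URLEncoder's output untouched; output_mode=json is an optional extra parameter, and buildExportUrl is a hypothetical helper name:

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: build the export URL with the query fully percent-encoded.
// URLEncoder encodes spaces as '+', which the server decodes back to spaces,
// so no manual .replace() chain is needed. (The Charset overload needs Java 10+.)
String buildExportUrl(String hostName, String searchQuery) {
    return hostName + "/__raw/services/search/jobs/export"
            + "?output_mode=json"
            + "&search=" + URLEncoder.encode(searchQuery, StandardCharsets.UTF_8);
}
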
"encrypt" property is set to "true" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) en... See more...
"encrypt" property is set to "true" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption: Error: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target. ClientConnectionId:  Any suggestions? 
I have inherited a Splunk system and this is one of the alerts:

| metadata index=index-cc* type=hosts
| eval age = now()-lastTime
| where age > 86400
| sort age d
| convert ctime(lastTime)
| fields lastTime,host,source,age
| rename age as "Seconds Since Last Event"
| search `Exempted_Dark_Devices`

How do I find the file Exempted_Dark_Devices?

Thank you

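One pointer that may help frame the question: the backticks mean `Exempted_Dark_Devices` is a search macro rather than a file, so it lives in a macros.conf stanza (or under Settings > Advanced search > Search macros) rather than on disk under that name. A sketch for locating it via REST (assuming the endpoint is reachable from your search head):

| rest splunk_server=local /servicesNS/-/-/configs/conf-macros
| search title="Exempted_Dark_Devices"
| table title definition eai:acl.app
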
Our objective is to integrate OpenTelemetry into a new project and establish a connection with Splunk. Specifically, we want to start transmitting OpenTelemetry (OTel) data to Splunk. OpenTelemetry can generate traces, metrics, and logging data for services. For now, our focus is on collecting telemetry data for a single service stack; if this proves successful, we are open to incorporating additional services in the future.

To facilitate this integration, we are using the OpenTelemetry Collector, a core component of the OpenTelemetry project and a freely available open-source tool. Splunk offers its own distribution of the Collector, but we are not using it at present. We would like confirmation that there are no costs associated with using the upstream OpenTelemetry Collector, given that it is part of the open-source OpenTelemetry project that vendors extend.

Our Splunk infrastructure, including the Search Head, Cluster Master, Indexers, and License Master, is hosted in the Cloud and managed by Splunk Support. As a Splunk Administrator, I am interested in how to configure and onboard OpenTelemetry logs into Splunk, and I am seeking clarification on the potential costs and effort associated with this initiative. Is it a separate subscription or something similar? I currently lack information on this matter. Kindly assist in checking and providing an update.

Hi All,

Our Splunk infrastructure, encompassing the Search Head, Cluster Master, Indexers, and License Master, is situated in the Cloud and managed by Splunk Support. Recently, one of our application teams requested to integrate and ingest MongoDB Atlas (Host & Audit) logs into Splunk. Following the documentation below, the application team attempted to install the Splunk OpenTelemetry (OTel) Collector on a Linux VM for a Proof of Concept (POC). In the process, they requested a token, which I generated from our Splunk Cloud Search Head. Unfortunately, the attempted integration did not yield the expected results.

I am now seeking clarification on whether the token generated from the Splunk Search Head is adequate, or whether an organizational access token needs to be generated instead. If the latter is necessary, I would appreciate guidance on where and how to generate it. As the administrator of our Splunk Cloud instances, I am also curious about the role of Splunk OpenTelemetry and whether it is included with Splunk Cloud. We receive multiple requests from users wanting to send OTel logs into Splunk. If Splunk OpenTelemetry is indeed included, I would appreciate guidance on generating the organizational token and where this process should take place.

https://docs.splunk.com/observability/en/gdi/opentelemetry/components/mongodb-atlas-receiver.html
https://docs.splunk.com/observability/en/gdi/opentelemetry/collector-linux/install-linux.html#otel-install-linux
https://docs.splunk.com/observability/en/admin/authentication/authentication-tokens/org-tokens.html#admin-org-tokens

As I examine the documentation, it explicitly mentions Splunk Observability, so I would like confirmation that I am following the correct procedure. The user attempted the installation using the same base64-encoded access token, but the result was once again unsuccessful. The user has also confirmed that there is internet access from the VM. At this juncture, we need guidance on how to generate an "organizational access token" to facilitate the integration process.

Can anyone kindly check and guide me on this, please?

Hi Team,

It seems many of our Machine Agents (around 200) are not associated with any applications. What action do we need to take, and why are the machine agents not mapped to applications? FYI: we are doing end-to-end monitoring with AppDynamics.

Thanks!

Hi All,

Our Splunk infrastructure, including the Search Head, Cluster Master, Indexers, and License Master, is hosted in the Cloud and managed by Splunk Support. We are currently in the process of integrating the ZenGRC tool with Splunk. On the ZenGRC side there is a Splunk connector. I have created an account using the Splunk authentication method, with admin privileges. Following the documentation, when I attempted to connect to Splunk via the Connectors section in ZenGRC, I encountered an error message: "Failed to Connect: Unknown error."

https://reciprocitylabs.atlassian.net/wiki/spaces/ZenGRCOnboardingGuide/pages/562331727/Splunk+Connector

For reference, the ZenGRC documentation on the Splunk Connector can be found above. When configuring the ZenGRC end, three pieces of information are required:

Splunk Instance API URL: https://[yourdomain].splunkcloud.com:8089
UserName/Email: xxx
Password: yyy

Upon attempting to connect, the process fails. Additionally, I have whitelisted the IPs as indicated in the Confluence documentation. Kindly provide guidance on resolving this issue.

IP Whitelisting - ZenGRC Wiki - Confluence (atlassian.net)

Hello,

I'm using Splunk Cloud and I have a user who wants to export search results containing 277,500 events. He is getting a timeout since the file is too large. Is there a way to export the file without changing the limitation? I cannot run a curl command since we are using SAML authentication.

Thanks

Hi,

I would like to have XML panel code passed from JavaScript into the Splunk XML dynamically. For instance, by default the XML dashboard has 2 panels. Then, when the JavaScript file executes, panels are added dynamically, so the user ends up with 3 or more panels depending on conditions. I have tried to pass the XML panel code from JavaScript to the XML as a token, but the dashboard does not display the panel. The following is a sample of the code I have.

XML:

<dashboard>
  .......
  <row>
    $table1$
  </row>
  ........
</dashboard>

JavaScript:

const panel = '<panel><table><search><query>index=_internal | stats count by sourcetype</query><earliest>-24h@h</earliest><latest>now</latest><sampleRatio>1</sampleRatio></search><option name="count">20</option><option name="dataOverlayMode">none</option><option name="drilldown">none</option><option name="percentagesRow">false</option><option name="rowNumbers">false</option><option name="totalsRow">false</option><option name="wrap">true</option></table></panel>';

var parser = new DOMParser();
var xmlDoc = parser.parseFromString(panel, "text/xml"); // important to use "text/xml"
tokens.set("table1", xmlDoc.documentElement);
submitted_tokens.set(tokens.toJSON());

May I know how to solve this, please? Thank you in advance.

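A note that may explain the behavior: SimpleXML tokens are substituted as text into searches, titles, and similar attributes; they are not parsed as dashboard structure, so a $table1$ token cannot inject a <panel> element. A common workaround is to predefine the maximum number of panels in the XML and toggle them from JavaScript with depends — a sketch, where $show_panel3$ is an assumed token name:

<row>
  <panel depends="$show_panel3$">
    <table>
      <search>
        <query>index=_internal | stats count by sourcetype</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </table>
  </panel>
</row>

In the JavaScript, tokens.set("show_panel3", "true") reveals the panel and tokens.unset("show_panel3") hides it again; fully arbitrary panel content would instead need the SplunkJS views API rather than tokens.
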
I am getting these warnings:

Invalid key in stanza [WSUS_vUpdateInstallationInfo] in /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf, line 140: is_template (value: 0).
Invalid key in stanza [WSUS_vUpdateInstallationInfo] in /opt/splunk/etc/apps/splunk_app_db_connect/local/db_inputs.conf, line 141: use_json_output (value: 0).

Can anyone please suggest what valid key names I can use in place of these two? Also, this problem has occurred after migration and upgrade.

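An assumption worth checking before hunting for replacement key names: is_template and use_json_output look like DB Connect 2.x input settings carried over during the migration, and newer DB Connect versions simply no longer recognize them, so the usual cleanup is to delete the two lines rather than rename them — a sketch:

# splunk_app_db_connect/local/db_inputs.conf — after cleanup
[WSUS_vUpdateInstallationInfo]
# ...keep all existing keys for this input as-is, and remove only:
# is_template = 0
# use_json_output = 0
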
I need a query to get the use cases created in the last 7 days, and another query to get the use cases fine-tuned in the last 7 days.

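Assuming (my assumption; the post doesn't say) that "use cases" means Enterprise Security correlation searches, here is a sketch built on the saved-searches REST endpoint. Note that REST exposes a single updated timestamp, so this catches both newly created and recently modified searches; telling the two apart needs the audit trail instead, and the strptime format may need adjusting to your timestamp format:

| rest splunk_server=local /servicesNS/-/-/saved/searches
| search action.correlationsearch.enabled=1
| eval updated_epoch=strptime(updated, "%Y-%m-%dT%H:%M:%S%z")
| where updated_epoch >= relative_time(now(), "-7d")
| table title updated eai:acl.app
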
Hello all!

Can anyone help me edit the SPL below so it only lists the _key - value pairs for the entities?

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text fields="_key,title,identifier,informational,identifying_name"
| eval value=spath(value,"{}")
| mvexpand value
| eval entity_title=spath(value, "title"),
       entity_name=spath(value, "identifying_name"),
       entity_aliases=mvzip(spath(value, "identifier.fields{}"),spath(value, "identifier.values{}"),"="),
       entity_info=mvzip(spath(value, "informational.fields{}"),spath(value, "informational.values{}"),"=")

This is a solution I found online, but it's rather complicated for my SPL skills at the moment....

Thank you in advance!

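A trimmed sketch of the same search that keeps only the key/title pair per entity (same endpoint and spath pattern as above, just with the unused fields dropped):

| rest splunk_server=local /servicesNS/nobody/SA-ITOA/itoa_interface/entity report_as=text fields="_key,title"
| eval value=spath(value,"{}")
| mvexpand value
| eval entity_key=spath(value, "_key"), entity_title=spath(value, "title")
| table entity_key entity_title
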
We have enabled Microsoft SAML for Splunk and our splunkd.log seems to be flooded with warnings like this:

WARN UserManagerPro [7456 SchedulerThread] - AQR and authentication extensions not supported. Or authentication extensions is supported but is used for tokens only. user=nobody information not found in cache

Found a few threads on AQR:

- https://community.splunk.com/t5/Monitoring-Splunk/What-is-quot-AQR-quot-and-why-is-it-throwing-warning-messages-in/m-p/347016/highlight/true#M3064
- https://community.splunk.com/t5/Security/How-do-you-resolve-splunk-log-error-messages-after-switching/m-p/354479/highlight/true#M8882

The documentation on authentication.conf also does not help me much. It seems the only way is to create a low-level user (the same one mentioned in the error) to suppress the warning, which seems doable, but I doubt this is the best way and I am unsure of side effects. Does any of you know more?

We are currently looking to migrate our standalone Splunk Enterprise server from the existing EC2 instance to a new one. The new server will be based on the AWS Marketplace AMI for version 9.0.8: https://aws.amazon.com/marketplace/pp/prodview-l6oos72bsyaks?sr=0-1&ref_=beagle&applicationId=AWSMPContessa

Reason for migration: we want to use the Marketplace AMI while upgrading the Splunk version. Since this migration involves going from version 8 to version 9, just copying over the apps (they contain the indexes) hasn't given us the result we wanted in our dev/test environment. We end up with no search UI loaded when the search app is copied over from the previous version 8 server to the version 9 server.

Has anyone else migrated their server this way, i.e. jumping versions while migrating to the new server? What would the community recommend for the scenario we currently have? Would an in-place upgrade to version 9, followed by a copy over to the new server, be a better/recommended option?

Hello Sirs,

I would like to know the most useful Splunk app for Linux auditd events. I have Linux devices such as management servers, DNS, HTTP servers, firewalls, etc. These logs are carried by both a syslog forwarder and heavy forwarders. Please suggest which Splunk app to use for monitoring the audit logs. Thanks a bunch.

Hi All,

I have searched the community threads for posts similar to this, but none have quite addressed the issue I am seeing. I have Splunk Cloud 9.1.2 and would like to retrieve logging from Snowflake. Following the Snowflake integration blog, I have installed the Splunk DB Connect app (3.15.0) and the Splunk DBX Add-on for Snowflake JDBC (1.2.1), using the self-service app install process on the Victoria experience.

Creating the identity in Splunk (matching the user created in Snowflake) works fine; however, creating the connection fails to validate (after trying for approximately 4-5 minutes) and gives me a non-descript error.

Connection string:

jdbc:snowflake://<account>.<region>.snowflakecomputing.com/?user=SPLUNK_USER&db=SNOWFLAKE&role=SPLUNK_ROLE&application=SPLUNK

In the logs I can see slightly more:

2024-02-26 00:59:26.501 +0000 [dw-868 - GET /api/connections/SnowflakeUser/status] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=validation connection_name=SnowflakeUser stanza_name= state=error sql='SELECT current_date()' message='JDBC driver: query has been canceled by user.'

This appears to hit some sort of timeout for the JDBC driver. The other thing I can see is that the stanza appears to be blank in this result; however, the default Snowflake stanza in the DB Connect app matches the stanza created in the Snowflake blog post. Any troubleshooting help would be much appreciated.

Hello,

Currently I'm attempting to make a CommandHistory field a bit more readable for our analysts, but I'm having trouble getting the formatting correct, or maybe I'm just using the wrong command or taking the wrong approach. Basically, our EDR dumps recent commands run on a system into the CommandHistory field, separated by a ¶ symbol. I'm trying to replace that with a newline at ingestion time.

Made-up example of what's in CommandHistory at the moment (I don't want to use real data, I apologize):

command1 -q lifeishard¶ReallyLong Command -t LifeIsHarderWhenYouCantFigureItOut¶ThirdCommand -u switchesare -cool¶One more command

The search-time commands that get me what I want in a field called commandHistory_sed:

| eval commandHistory = CommandHistory
| rex field=commandHistory_sed mode=sed "s/\¶/\n/g"

This ends up looking like this:

command1 -q lifeishard
ReallyLong Command -t LifeIsHarderWhenYouCantFigureItOut
ThirdCommand -u switchesare -cool
One more command

What I've tried in props.conf:

SEDCMD-substitute = 's/\¶/\n/g'
SEDCMD-alter = 's/\¶/\n/g'

Neither works. We have many other EVAL and FIELDALIAS statements under this sourcetype in props.conf that are functioning fine, so I think I'm just not formatting the SED properly, or I'm not taking the right approach. Does anyone have advice on what I am doing wrong and what I need to do to achieve the result? Thank you for any help in advance!

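One thing that stands out (a sketch of a possible fix, under the assumption that the quoting is the problem): props.conf values are taken literally, so the single quotes around the sed expression become part of it. SEDCMD also runs at index time, so it has to live on the first heavy forwarder or indexer that parses this sourcetype and only affects newly indexed events:

# props.conf on the parsing tier; [your:edr:sourcetype] is a placeholder
# for the real sourcetype name used by the EDR data
[your:edr:sourcetype]
SEDCMD-pilcrow = s/¶/\n/g
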
I have created some indexes on Splunk Cloud. Can we not delete these indexes? The option to delete is disabled in Splunk Cloud. Can anyone help with this?

In a drilldown, I have 2 possible queries and they look like:

qry1 = index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND (mid="$token_mid$" OR "MID $token_mid$")
qry2 = index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND (icid="$token_icid$" OR mid="$token_mid$" OR "MID $token_mid$")

If "$token_icid$" == 0, execute qry1; else execute qry2. How can this be achieved?

ChatGPT gives this answer, but it is not working:

index=fed:xxx_yyyy sourcetype="aaaaa:bbbbb:cccc" source_domain="$token_source_domain$" AND (
    (($token_icid$=="0") AND (mid="$token_mid$"))
    OR (($token_icid$!="0") AND (icid="$token_icid$"))
    OR mid="$token_mid$"
    OR "MID $token_mid$"
)
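One reason the generated search fails: in a base search, comparisons need a field name on the left, so after token substitution a clause like ($token_icid$=="0") becomes a literal-vs-literal comparison ("0"=="0") that the search language does not evaluate. A workaround that keeps both queries intact is to pick the query in the drilldown itself with an <eval> token — a sketch, assuming $token_icid$ always carries a number and $drill_query$ is an assumed token name (the |u filter URL-encodes the token in the link):

<drilldown>
  <!-- build the whole query as a token: qry1 when icid is 0, otherwise qry2 -->
  <eval token="drill_query">if($token_icid$ == 0,
    "search index=fed:xxx_yyyy sourcetype=\"aaaaa:bbbbb:cccc\" source_domain=\"$token_source_domain$\" AND (mid=\"$token_mid$\" OR \"MID $token_mid$\")",
    "search index=fed:xxx_yyyy sourcetype=\"aaaaa:bbbbb:cccc\" source_domain=\"$token_source_domain$\" AND (icid=\"$token_icid$\" OR mid=\"$token_mid$\" OR \"MID $token_mid$\")")</eval>
  <link target="_blank">search?q=$drill_query|u$</link>
</drilldown>

A <condition match="..."> pair inside the drilldown would work the same way if you prefer two separate <link> elements over one eval expression.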