All Topics

Is the precedence of configuration options "correct"? From this: https://docs.appdynamics.com/display/PRO45/Administer+the+Java+Agent I understand and have verified that system properties are overridden by environment variables. This is exactly the opposite of what I would expect, and is different from every other convention I have encountered. It also makes it hard to do things like baking a default configuration into a Docker image and then overriding those settings on the java command line in a "run.sh" script.
Hi, I have a query where I want to display the field name and source name as well. I am trying to com

| set diff
    [ search index=_internal sourcetype=splunkd | fieldsummary | fields field
      | rename field AS "splunkd fields"
      | append [ search index=_internal sourcetype=splunkd | fieldsummary | fields field ] ]
| append
    [ search index=_internal sourcetype=mongod | fieldsummary | fields field
      | rename field AS "mongod fields"
      | append [ search index=_internal sourcetype=mongod | fieldsummary | fields field ] ]

Any help appreciated.
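If the goal is to see which fields appear in only one of the two sourcetypes, a simpler sketch (index and sourcetype names taken from the question; untested against your data) tags each field with the sourcetype it came from and keeps the fields that occur in only one:

```
index=_internal sourcetype=splunkd
| fieldsummary | fields field
| eval from_sourcetype="splunkd"
| append
    [ search index=_internal sourcetype=mongod
      | fieldsummary | fields field
      | eval from_sourcetype="mongod" ]
| stats values(from_sourcetype) as sourcetypes by field
| where mvcount(sourcetypes)=1
```

The final `where` keeps only fields seen in a single sourcetype; drop it to list every field alongside all the sourcetypes it appears in.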
Hi, I am new to Splunk. I have the log below, which captures a product id:

Header product-id, 12345678900
Header product-id, 12345678901
Header product-id, 12345678900

I would like to group by unique product id and count:

12345678900 2
12345678901 1

Here product-id is not a field in Splunk. How can I write a query for this?
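Since product-id is not an extracted field, one common approach is an inline rex extraction followed by stats. A sketch, assuming the events land in a hypothetical index named myindex:

```
index=myindex "Header product-id"
| rex field=_raw "product-id,\s*(?<product_id>\d+)"
| stats count by product_id
```

`stats count by product_id` produces exactly one row per unique id with its count, matching the desired output above.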
"grid_w":1693,"solar_pct":0,"epoch":1586824635}} I need to ingest a JSON file with epoch time stamps. Its timestamp is the last part of the JSON string. I need help. I think I need to set up the index as a regular structured _json file, but I'm stuck as to how to get the timestamp correct. Thanks in advance, gals and guys.
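One way to pick up the epoch value at index time is a props.conf stanza that points TIME_PREFIX at the JSON key and parses it as epoch seconds. A sketch, assuming a hypothetical sourcetype name json_epoch; the stanza belongs on the indexer or heavy forwarder:

```
[json_epoch]
KV_MODE = json
TIME_PREFIX = "epoch":
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11
```

TIME_FORMAT = %s tells Splunk the value is epoch seconds, and MAX_TIMESTAMP_LOOKAHEAD keeps it from reading past the 10-digit number.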
Hi guys, I am unable to install the Palo Alto add-on. When I upload the Palo Alto add-on v6.2 to Splunk Enterprise 8.0.3 I get "Error connecting to /services/apps/local:" and Splunk Web cannot be used: "500 Internal Server Error - The server encountered an unexpected condition which prevented it from fulfilling the request."
A number of applications and services in our environment use Log4j for logging. Is there a CIM (Common Information Model) for Log4j log types - or perhaps just the accepted / standardized field names? (The idea is to set up field extraction correctly the first time so we don't have to do it again in the future.) In other words, what I need first and foremost are standardized field names for this log type - and anything else that needs to be done to have a clean and performant field extraction that will last a while without major revisions that might wreak havoc on existing dashboards, reports and searches.

Event examples:

2020-04-13 15:20:53,379 ERROR [com.somejavaapp.exec.Server] (pool-1-thread-1) - Caught exception producing output
java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(Unknown Source)
    at java.net.SocketOutputStream.write(Unknown Source)
    ... (15 lines total)

2020-04-13 15:20:53,379 ERROR [com.somejavaapp.exec.Server] (Thread-149821) - Exception while sending progress data
java.net.SocketException: Connection reset by peer: socket write error
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(Unknown Source)
    at java.net.SocketOutputStream.write(Unknown Source)
    ... (8 lines total)

etc...

So perhaps the field names should be as follows?

_time: 2020-04-13 15:20:53,379
severity? log_level? ERROR
java_class? [com.somejavaapp.exec.Server]
java_class_package? com.somejavaapp.exec
java_class_package_namespace? com.somejavaapp
thread? (pool-1-thread-1) (Thread-149821)
message? Caught exception producing output
exception? java.net.SocketException: Connection reset by peer: socket write error
java_traces?
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(Unknown Source)
    at java.net.SocketOutputStream.write(Unknown Source)

Thanks!
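As far as I know there is no Log4j-specific CIM data model, so the field names above are reasonable suggestions rather than CIM requirements. As a starting point, a search-time extraction covering the first line of each event might look like the sketch below (the sourcetype name log4j_app and the field names are assumptions, not a standard):

```
[log4j_app]
EXTRACT-log4j_fields = ^(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})\s+(?<log_level>[A-Z]+)\s+\[(?<logger>[^\]]+)\]\s+\((?<thread>[^)]+)\)\s+-\s+(?<message>.+)
```

The package-level fields (java_class_package and so on) could then be derived from logger at search time with eval, rather than extracted separately.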
I ingested a .CSV into Splunk which contained some patching information derived from another system. The problem with the report is that it's producing multiple rows with the same patch information. I ran it through Splunk to see if I could clean it up some by using the following query:

index=patching sourcetype=patching
| stats count by Patch_History, Patch_Number, Count
| sort -Count
| stats list(Patch_History) as Count, list(Count) by Patch_Number

This query produced the following output. I'd like to add a subtotal for each section. I've tried using | addcolumns fieldname=" ", etc., but am not getting the desired results. Any help is appreciated!
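For per-group subtotals, appendpipe can add a subtotal row beneath each Patch_Number group without a second search. A sketch built on the question's index and field names (untested against the actual data):

```
index=patching sourcetype=patching
| stats count by Patch_Number, Patch_History
| appendpipe
    [ stats sum(count) as count by Patch_Number
      | eval Patch_History="-- subtotal --" ]
| sort Patch_Number
```

appendpipe runs its subsearch over the results so far and appends the output, so each Patch_Number gains one extra row carrying the summed count.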
When I click on an interesting field I have 100 values but it only displays the top 10. How can I view all values?
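The field sidebar only previews the top 10 values; to see all of them, run a quick aggregation over the field instead. A sketch with hypothetical index and field names:

```
index=your_index
| stats count by your_field
| sort -count
```

`| top limit=0 your_field` gives the same list with counts and percentages.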
Hi guys, I need to on-board some log files, but only the last 200 bytes of each log file contain useful information. Are there any settings in props.conf that will ingest only the last XX characters of each log file? Each file contains only a single line of log. Or is there any other workaround that could do the trick? Many thanks. S
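props.conf has no "tail N bytes" input setting, but since each file is a single-line event, one possible workaround is an index-time SEDCMD that strips everything except the last 200 characters. A sketch only — this assumes the whole file arrives as one event, and the stanza (with a hypothetical sourcetype name) goes on the indexer or heavy forwarder:

```
[single_line_tail]
SEDCMD-keep_tail = s/^.*(.{200})$/\1/
```

The greedy `.*` consumes the front of the event, leaving the final 200 characters in the capture group.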
I have a table with many multi-value fields. For example: items, cp and sp are multivalue fields. Using the following command

... | table items, cp, sp

say we have the following table as an output:

Events   items   cp    sp
1        item1   5     6
         item2   7     4
         item3   8     9
2        item1   53    62
         item2   17    14
         item3   89    90
3        item5   50    55
         item6   17    14
         item7   110   90

My intent is to use a stacked column chart such that each column is an item column with cp and sp values stacked, and the items should be grouped by event. Different events can be considered as times; at a different time the cost/selling price of an item may differ.
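To chart this, the multivalue rows first have to be flattened so that each item/cp/sp triple becomes its own row. A common mvzip/mvexpand sketch (field names from the table above; the per-row identifier is assumed to be a field called event):

```
... | eval triple=mvzip(mvzip(items, cp), sp)
| mvexpand triple
| eval item=mvindex(split(triple, ","), 0),
       cp=tonumber(mvindex(split(triple, ","), 1)),
       sp=tonumber(mvindex(split(triple, ","), 2))
| eval series=event.": ".item
| chart sum(cp) as cp, sum(sp) as sp over series
```

With cp and sp as the two chart series and the visualization set to stacked columns, each item gets one column with cp and sp stacked, grouped by event via the series label.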
We have configured the Splunk plugin for Jenkins version 1.7.4 on the Jenkins side. We want to send Custom Metadata -> Data source "Queue Information" to index=index_statistics. In addition to this data we are getting console logs in index=jenkins_console. We cannot find where exactly we can disable sending Jenkins console log data to Splunk. Has anybody had any experience with that?
I would like to drill down. If I use the drilldown editor and set it to auto, this works, but this is unacceptable because we need the link to open in a new tab. So I tried setting the drilldown editor to custom, which I've done a thousand times before, including with other drilldowns on the dashboard I am having problems with. However, for one specific drilldown, the custom drilldown made it so only a blank search would open in the new tab. In response I followed https://answers.splunk.com/answers/621427/custom-drilldown-search-not-working.html and coded the non-tokenized precise HTML link directly into my SimpleXML. However, the encoding of the % character breaks on click. Literally, if I copy and paste the drilldown HTML code from SimpleXML into my browser it works, but not if I click to drill down. Splunk Answer #621427 deals with the drilldown editor breaking %-character encoding, but this is an HTML drilldown breaking %-character encoding. Any assistance?

SimpleXML (custom drilldown works):
<base search> | where NOT duo_status="Active" | stats count(eval(isnull(mobile))) as nonairwatch_duo_inactive

SimpleXML (custom drilldown breaks):
<base search> | eval last_vpn_access=strptime(last_vpn_access,"%Y-%m-%d %H:%M:%S")

SimpleXML drilldown HTML link (correct syntax):
... %20%7C%20eval%20last_vpn_access=strptime(last_vpn_access,%22%25Y-%25m-%25d%20%25H:%25M:%25S%22)

Browser URL when clicked (incorrect syntax):
https:// ... %20|%20eval%20last_vpn_access=strptime(last_vpn_access,"%Y-%m-%d%20%H:%M:%S")
Hi, (I see previous questions on this topic but haven't been able to find the answer to my issue). I have a report that has been running successfully for some time, but the owner was recently removed from Splunk. When I visit the report URL now I get the message "There are no results because the first scheduled run of the report has not completed." The report appears to run successfully, however, as this query reports success for status:

index=_internal sourcetype=scheduler | stats count by savedsearch_name status user

I notice that the URL to the report now has "nobody" in it and wonder if this relates to the issue. I feel like perhaps the report is running successfully but the URL to access it is incorrect? To get the URL I use these steps:

1. Navigate to the app in Splunk
2. Click on Reports
3. Click on the name of the report in the Title column

The result is: "There are no results because the first scheduled run of the report has not completed." I saw documentation for reassigning owners to orphaned knowledge objects. We did change the owner for one of the reports, but the result is the same. Thank you!
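To check who currently owns the saved search (and whether reassignment actually took), the rest command can list saved searches with their access-control details. A sketch, run from the search head — "nobody" as owner usually indicates an app-level object rather than a user-owned one:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| table title, eai:acl.app, eai:acl.owner, eai:acl.sharing
| search eai:acl.owner="nobody"
```

Comparing eai:acl.owner before and after the reassignment should confirm whether the ownership change was applied to the object the URL points at.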
I have a correlation search for detecting when a host stops sending logs. I enabled the search and set the title as below, but when I receive the notables, my results show the hostname of the search head as the $host$ instead of the actual host that stopped sending logs. When I expand the notable, it does show the correct host under the "Additional Fields" section, just not in the title of the notable. The $Latest_Time$ doesn't work either; I'm not sure if it's even possible to use that, or what I would have to put so it shows the Latest_Time / Last Time Reported field. Any help would be greatly appreciated!

Title of correlation search: "Host $host$ stopped sending logs since $Latest_Time$"

Query:
| metadata type=hosts index=*
| where relative_time(now(), "-1d") > lastTime AND lastTime > relative_time(now(), "-90d")
| convert ctime(lastTime) as Latest_Time
| sort -lastTime
| table host, Latest_Time
| lookup asset_lookup_by_str nt_host AS host OUTPUTNEW priority AS priority, bunit AS bunit
| rename Latest_Time AS "Last Time Reported"
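Two things worth trying here, offered as assumptions rather than confirmed ES behavior: the notable event's own indexed host field (the search head that wrote it) can shadow a result field called host, and the final rename removes Latest_Time before the title token is filled in. Renaming host to something like orig_host and keeping Latest_Time un-renamed gives this sketch:

```
| metadata type=hosts index=*
| where relative_time(now(), "-1d") > lastTime AND lastTime > relative_time(now(), "-90d")
| convert ctime(lastTime) as Latest_Time
| sort -lastTime
| rename host AS orig_host
| lookup asset_lookup_by_str nt_host AS orig_host OUTPUTNEW priority, bunit
| table orig_host, Latest_Time, priority, bunit
```

with a matching title of "Host $orig_host$ stopped sending logs since $Latest_Time$" — the token names must match fields that still exist in the final result rows.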
Hello, I have the Splunk query below, which has multiple sourcetype rows. If a row matches the x-correlation-id keyword, it needs to return Status x-correlation-id, or return "missing" otherwise. There are a couple of rows which have x-correlation-id in the header, but all rows are currently returning "missing". Any ideas why the match is not working?

environment=prod sourcetype=* "x-correlation-id"
| fields sourcetype, Properties.Headers{}, Properties.CorrelationId, CorrelationId, Category
| eval Status=if(match("Properties.Headers{}", "X-Correlation-id"), "x-correlation-id", "missing")
| dedup sourcetype
| table sourcetype, Status, Properties.CorrelationId, CorrelationId, Category
| sort str(Properties.Headers{})

sourcetype   Status    Properties.CorrelationId
o1           missing   c1
o2           missing   c2
o3           missing   c3
o4           missing   c4
o5           missing   c5
o6           missing   c6
o7           missing   c7
08           missing   c8
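The likely bug: in eval, double quotes denote string literals, so match("Properties.Headers{}", ...) tests the 21-character literal string, not the field's value. Field names containing special characters must be wrapped in single quotes. A corrected sketch, with a case-insensitive flag since header casing can vary:

```
environment=prod sourcetype=* "x-correlation-id"
| eval Status=if(match('Properties.Headers{}', "(?i)x-correlation-id"), "x-correlation-id", "missing")
| dedup sourcetype
| table sourcetype, Status, Properties.CorrelationId, CorrelationId, Category
```

Note also that dedup sourcetype keeps only the first event per sourcetype, so a sourcetype will show "missing" if its first event happens to lack the header even when other events have it.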
Hello, why do custom commands not work in preview mode? Here is the INFO log I received in my search.log:

04-13-2020 15:51:59.803 INFO ChunkedExternProcessor - Exiting custom search command after getinfo since we are in preview mode:mystream
How do I turn off the attempted communication from the Java agent to the Network Visibility agent? The Java agent running in a docker image periodically emits this error message to its log:

[AD Thread Pool-Global9] 12:57:01,474 ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=0.3.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)

I gather we do not have a license for the NVA, so we do not need this. Also, is there any reason to have all the logging levels in the Java agent set to INFO? A huge amount of noise is generated that does not seem to have much value - might it be better to choose ERROR?
Trying to connect to an Oracle 18c DB, I have followed the instructions in the troubleshooting guide https://docs.splunk.com/Documentation/DBX/3.3.0/DeployDBX/Troubleshooting ("Connect Splunk DB Connect to Oracle Wallet environments using ojdbc8") without any luck. I'm getting the following error regardless of what I put in the connection string, which leads me to believe that the message is just disguising the real issue:

[dw-58 - POST /api/connections/status] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: 2a01705792fdb22a
java.lang.NullPointerException: null
    at com.splunk.dbx.connector.logger.AuditLogger.replace(AuditLogger.java:50)
    at com.splunk.dbx.connector.logger.AuditLogger.error(AuditLogger.java:44)

I have the logs set to DEBUG, but there is no information other than the null pointer exception mentioned above. I am able to connect to the DB from the server using sqlplus and show the connection string using tnsping.
Hi, I want to preface this by saying I understand that props isn't fully processed if you install it on the universal forwarder. My question is about the difference between an install of a Splunk Universal Forwarder vs. full Splunk converted to a forwarder license. My setup is a fresh install of an ancient 6.5.2 SplunkForwarder and 6.5.2 Splunk in forwarder mode on two machines, mapped like this:

Server1: Universal Forwarder --> Indexer
Server2: Splunk (updated to run as a forwarder) --> Indexer

On the indexer I have a props change (a trivial SEDCMD-test = s/a/o/g). If I install the same serverclass on both servers that reads a /tmp/test.log, and I write some lines with the letter a in them, Server 1's messages end up changed from a to o while Server 2's do not; they stay as a. I've tested it with multiple server installs (albeit on an old version, 6.5.2). It seems to me that Splunk in forwarder mode, unlike a dedicated Splunk Universal Forwarder, applies some kind of tag that prevents downstream props/transforms changes from occurring. A raw message coming from a Universal Forwarder is then processed by the indexer's props/transforms, while a message coming from Splunk in forwarder mode is not.

Note: I checked for any silliness. Both servers send to the same indexer and have identical serverclass, sourcetype, inputs and outputs. And the props on the indexer only applies to a sourcetype, not any specific host (btool matches on the source servers).

My questions are listed below.

1. Can you confirm that there is some kind of cooked tag on events coming from Splunk (in forwarder mode) that tells downstream systems not to apply props/transforms and just write immediately to an index?
2. Is there anything I can do on the server with Splunk in forwarder mode to make it behave exactly like a Universal Forwarder? Perhaps an etc conf change, or do I need to just uninstall it and set up the Splunk Universal Forwarder from scratch (confirmed this works)?
3. How can I debug props/transforms issues between two servers? Turning on DEBUG mode didn't say anything useful in splunkd.log like "applying props [foo] on event 'Hello World'".

Thanks!
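This matches documented heavy forwarder behavior: a full Splunk instance forwarding data parses events locally (the data arrives "cooked and parsed"), so the indexer skips its own parsing pipeline for those events, and index-time settings like SEDCMD never run there. The usual fix is to place the stanza on the heavy forwarder itself. A sketch reusing the SEDCMD from the question (the sourcetype name is a placeholder):

```
# props.conf on Server2 (the heavy forwarder), e.g. in
# $SPLUNK_HOME/etc/system/local/ or an app's local/ directory
[your_sourcetype]
SEDCMD-test = s/a/o/g
```

As far as I know, the clean way to get true UF behavior from that machine is to install the Universal Forwarder package rather than trying to make the full instance forward unparsed data.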