All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am sending sauce labs test results to splunk and they are in this format:     { "testsuite": { "@name": "'PR-1082 #1' - device: Samsung_Galaxy_Note_8_real_us - test report id: 217", "@tests": "6", "@failures": "1", "@skipped": "0", "@timestamp": "2020-08-06T20:48:40.510Z", "@time": "223.437", "@package": "com.example", "testcase": [ { "@name": "C120116_testStatusFilterExitButton", "@classname": "com.example.AlertsMockTest", "@time": "7.47", "failure": "NoMatchingActivityFoundException: Activity MainActivity not visible after 5000 milliseconds\r\nat com.example.MainActivityRobot.isPageShown(MainActivityRobot.kt:17)\r\nat com.example.robots.MainActivityRobot.isPageShown$default(MainActivityRobot.kt:36)\r\nat com.example.AlertsMockTest$C120116_testStatusFilterExitButton$1.invoke(AlertsMockTest.kt:57)\r\nat com.example.AlertsMockTest$C120116_testStatusFilterExitButton$1.invoke(AlertsMockTest.kt:19)\r\nat com.example.robots.MainActivityRobotKt.mainActivity(MainActivityRobot.kt:30)\r\nat com.example.AlertsMockTest.C120116_testStatusFilterExitButton(AlertsMockTest.kt:56)\r\nat java.lang.reflect.Method.invoke(Native Method)\r\nat org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)\r\nat org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)\r\nat org.junit.runners.ParentRunner.run(ParentRunner.java:363)\r\nat org.junit.runner.JUnitCore.run(JUnitCore.java:137)\r\nat org.junit.runner.JUnitCore.run(JUnitCore.java:115)\r\nat androidx.test.internal.runner.TestExecutor.execute(TestExecutor.java:56)\r\nat androidx.test.runner.AndroidJUnitRunner.onStart(AndroidJUnitRunner.java:395)\r\nat android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:2156)" }, { "@name": "C119770_testAlertTitleIs25Chars", "@classname": "com.example.AlertsMockTest", "@time": "6.835" }, { "@name": "C120116_testAlertsStatusFilterButton", "@classname": "com.example.AlertsMockTest", "@time": "7.181" }, { "@name": "C120106_testBackButtonExitsAppOnRegistrationLogin", 
"@classname": "com.example.MainActivityTest", "@time": "4.879" }, { "@name": "C117932_testShareButtonOpensShareSheet", "@classname": "com.example.MainActivityTest", "@time": "4.583" }, { "@name": "C134741_testZeroDashboards", "@classname": "com.example.ZeroDataMockTest", "@time": "7.248" } ] } }       I am trying this SPL:     source="http:test-results" sourcetype="_json" "testsuite.@failures">0 | rename testsuite.testcase{}.failure as testcase_failure, testsuite.testcase{}.@name as testcase_name | eval testcase_name=if(isnotnull(testcase_failure), testcase_name, "fail") | table testsuite.@name, testsuite.@failures, testsuite.@timestamp, testcase_name, testcase_failure | sort -testsuite.@timestamp | fieldformat testcase_failure=substr(testcase_failure,0,300)       However I get a table that looks like this:   How can I get the testcase_name column to only show just "C120116_testStatusFilterExitButton" for the test with a failure?
If a report is accelerated in the Search app, are the other apps supposed to benefit from its acceleration? The report is shared with all apps and the Search app is exported globally. I ask because I am seeing the opposite behavior. Test 1: I run the saved search in the Search app context; it looks up the accelerated data and returns results fast. Test 2: I run the saved search in the exchange app and it is not accelerated. I am re-running the report acceleration in the app context I need, but found this behavior odd. It seems the local.meta permissions are valid for the report share but not for the summarization created by the report acceleration option. So the question is: if a report is accelerated and shared, can other apps take advantage of the accelerated report?
Hello, I have a very involved query involving 4 joins and I am looking for a way to speed it up. It drives a dashboard that cleanly presents the needed data in a single table instead of 5 separate panels (per the requirements given to me). One of the problems I am running into is that some of the searches use different indexes, so I need multiple searches for them, and appendcols doesn't seem to work since the only real thing they have in common is that 3 share the same index; there isn't one clean base search to use. I'm not sure I can post the query here due to regulations, so I will try to be as specific as possible. How smart is Splunk when it comes to queries? 3 of the searches use the same index, so could I do something like the below: index=xyz (A and B and C) OR (D and E and F) OR (G and H and I) | stats based on (A and B and C) | stats based on (D and E and F) | stats based on (G and H and I) If you have any other tips or resources on speeding up joined queries, that would be great as well.
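Not part of the original post, but as a sketch of the usual join-free pattern: run one search over the shared index, tag each event with which condition it matched, and aggregate by that tag. Here condA/condB/condC and the aggregations are placeholders for the real filters and stats:

```
index=xyz (condA) OR (condB) OR (condC)
| eval group=case(searchmatch("condA"), "A",
                  searchmatch("condB"), "B",
                  searchmatch("condC"), "C")
| stats count as events, dc(host) as hosts by group
```

This reads the index once instead of three times; searchmatch() re-applies each filter string as an eval-time test so events are labeled by the branch they satisfy.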
Hi, all, I'm trying to ingest pcap files using Splunk Stream with the config shown below.

[streamfwd]
streamfwdcapture.0.interface = /tmp/pcap/
streamfwdcapture.0.offline = true
streamfwdcapture.0.sysTime = false

Although pcap ingestion itself is successful, the record timestamps are not the actual ones from the pcap files; the system time seems to be used instead. (To me, streamfwdcapture.0.sysTime in the conf does not work properly.) Does anyone have the same experience, and any solutions? Below is the environment I tried: Splunk 7.3.6, Splunk Stream 7.2.0
I have a search that performs a basic dbxquery connection and SQL search.  If the database table were to be dropped or there were any other errors encountered during the search, I would like the rest of the SPL to keep chugging along. Is there a creative way to continue past the dbxquery error? This is a simple example looking up an Oracle DB name:   | makeresults | eval connection_name="<my db connection>" | map search="| dbxquery connection=$connection_name$ query=\"select * from global_nameWRONG;\"" | eval test="didn't get here" | table GLOBAL_NAME,test  
Hello,   I have json data and I am trying to search a specific field using a dynamic variable. I can properly search if I have an exact static field but not dynamic field. As an example, the below works:     source="main.py"| spath "cve.CVE_data_meta.ID" | search "cve.CVE_data_meta.ID"="CVE-2018-XXXX" | table cve.description.description_data{}.value     However, I am trying to feed a dynamic variable (test) from a different search to extract the correct value such as the below (shortened to make it easier to read):   source="main.py"| eval test="CVE-2018-XXXX" | spath "cve.CVE_data_meta.ID" | search "cve.CVE_data_meta.ID"=test | table cve.description.description_data{}.value     In the above case as an example I just hard coded the test variable but that value will come from a different search. Anyhow, the above does not work. I tried many variations and nothing really seems to work. I think the problem is due to the fact splunk thinks I am working with multiple value data and I cannot properly search off that. Anyhow, I think there has to be an easy solution that I cannot seem to get on my own. Hopefully someone can push me to the finish line with this.   Thanks, Marty
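One likely fix (a sketch, not from the thread): the search command treats the right-hand side of = as a literal string, so comparing one field against another needs where, with single quotes around the field name that contains dots:

```
source="main.py"
| eval test="CVE-2018-XXXX"
| spath "cve.CVE_data_meta.ID"
| where 'cve.CVE_data_meta.ID'=test
| table cve.description.description_data{}.value
```

In eval/where expressions, single quotes mean "the value of this field" while double quotes mean a string literal, which is why search "field"=test matched the literal word test rather than the variable.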
Hi all. Our Incident Review page is getting needlessly large, and I want to create a dashboard that will populate with a select few rule_names or titles that I can see in the Incident Review tab in Splunk ES.       |`incident_review` | fields - time         for example shows me useful fields, because I also get _time, owner/reviewer, rule_name and status. What I am looking for is all of that, plus the rules that came in and are still unassigned to an owner.
I have a transaction of events. The first event of the transaction contains text that I am using | rex field=_raw ..... to extract two fields from: Rising_Server and Falling_Server. If I specify the time period to only include the events within the transaction and nothing else, how can I apply the Rising_Host and Falling_Host fields to the other events? More specifically, the first event contains the values for those two fields; none of the other events contain them. However, I want to compare the hosts of the other events (that don't contain the fields) and see if they are the same as Rising_Host and Falling_Host, because I want to filter out any events that aren't coming from those two hosts. I've tried adding | where (host=Rising_Host OR host=Falling_Host) to the search, but that of course only shows the first event, which has those fields. Any suggestions on how to compare that field value against events without the field?
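A sketch of one possible approach (assuming the search window covers a single transaction, as described, and the rex pattern below is a placeholder for the real extraction): eventstats with no by clause copies an aggregate, here the values extracted from the first event, onto every event in the result set, so the filter can then apply everywhere:

```
index=my_index earliest=... latest=...
| rex field=_raw "rising=(?<Rising_Host>\S+)\s+falling=(?<Falling_Host>\S+)"
| eventstats values(Rising_Host) as Rising_Host, values(Falling_Host) as Falling_Host
| where host=Rising_Host OR host=Falling_Host
```

If the window can contain multiple transactions, a by clause on some transaction identifier would be needed so the values are not mixed across transactions.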
How do I get in touch with Splunk sales ?  I filled out the online form twice. No response after a week. I phoned Splunk sales, and got voicemail. I tried reaching the operator and got no response.  I tried leaving voicemails and got no call backs.  If anyone can put me through a Sales contact with whom I can interact directly that would be appreciated. 
Hi Everyone, Below are my CSV fields and some sample values; I am continuously monitoring the CSV file:

TIMESTAMP, NAME, AGE, PHONE_NO, ZIP
07/08/2020 12:00:00 PM, ABC, 20, XYZ, 123
07/07/2020 12:00:00 PM, XYZ, 18, XYZ, 456

1. Splunk stores this as 3 events, because it also treats the header row as an event, and I do not want the field names indexed as an event. I have tried several Splunk Answers suggestions with no luck, or perhaps I am doing it in a wrong way. Please suggest how to fix this.

2. TIMESTAMP, NAME, AGE, PHONE_NO, ZIP
07/08/2020 12:00:00 PM, ABC, 20, XYZ, 123
07/08/2020 12:00:00 PM, PQR, 19, XYZ, 456

I changed NAME & AGE in the 2nd row and updated the time so that Splunk would pick up the latest time and display the latest data on the dashboard. The problem is that every time I save the file, Splunk indexes all the data in it, including the header row. I do not want the field names indexed as an event, and I want Splunk to index only new rows, or the rows I have changed, to avoid indexing duplicate data again and again. It would be good if anyone could help me fix this issue. Thanks!
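One common way to handle the header issue (a sketch, not from the thread; the sourcetype name is a placeholder) is a structured-data props.conf stanza on the forwarder monitoring the file, which parses the header row instead of indexing it as an event:

```
[my_csv_data]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = TIMESTAMP
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
```

Note this addresses only the header; whether previously seen rows get re-indexed on save depends on how the editor rewrites the file relative to the monitor's checkpoint, which is a separate problem.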
For "Endpoint - Malicious File Detection in Cloud Application playbook" tickets,how do I include the last six characters of the sha256 hash in the ticket title
There is a command field in my logs and it consists of unix commands. One value is  /usr/bin/ssh -q -o ConnectTimeout=5 -o BatchMode=yes zevsbdr66599.prodb.cally.org netstat -rn I am looking to extract netstat -rn. Can someone provide me a way to split it?
I installed a Splunk search head on my Windows machine.  I installed a forwarder on a RHEL8 VM hosted by the same machine.  The forwarder monitors /var and /etc.  The systems can ping each other, and ports 9997 and 8089 are open.  I have restarted Splunk on both systems.  No errors occurred during installation or on any other operation, but no data appears on the search head. Please help.
In Splunk Enterprise, I am not able to see the data in a pie chart change with the time range. Can anyone please help me out? It works fine when I run the search directly, but what needs to be added for the pie chart, some token value? If so, can you please help me out with the steps? Thanks
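For reference, a minimal Simple XML sketch (the index, field name, and token name are placeholders) of wiring a time-range input token into a pie-chart panel's search:

```
<form>
  <fieldset>
    <input type="time" token="tp">
      <label>Time range</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=main | stats count by category</query>
          <earliest>$tp.earliest$</earliest>
          <latest>$tp.latest$</latest>
        </search>
        <option name="charting.chart">pie</option>
      </chart>
    </panel>
  </row>
</form>
```

Without the $tp.earliest$/$tp.latest$ elements the panel's search runs over its own fixed window, which is one common reason a chart ignores the picker.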
Hey community, I have my data in both MySQL and in Splunk. I'm trying to mimic the MySQL queries in Splunk so I can make a visual. My data has five columns: "Month", "Project", "Status", "Completion", "Points". The first query sums the "Points" column only for rows whose Status and Completion values are both "Done", grouping by Month. The second query sums the Points column by Month alone. The problem I'm running into is combining them into one Splunk query, as I'm trying to have both tables in one graph. Any suggestions?    select Month, sum(Points) from TABLE where Status = "Done" and Completion = "Done" group by Month; and select Month, sum(Points) from TABLE group by Month;
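As a sketch (not from the thread; the index is a placeholder), both sums can come out of a single stats pass by wrapping the conditional one in sum(eval(...)):

```
index=my_index
| stats sum(Points) as all_points,
        sum(eval(if(Status="Done" AND Completion="Done", Points, 0))) as done_points
    by Month
```

This mirrors the SQL pattern SUM(CASE WHEN ... THEN Points ELSE 0 END) and puts both series in one result set, ready for a single chart.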
Hello all, I am attempting to put together a search where I'm taking website status (200=allowed, etc) and breaking it into allowed and denied:   | stats count by user, status, http_method | eval action=if(match(status,"^(2)([0-9]*)$"),"Allowed","Denied") | stats list(action) as Action, list(count) as "Count", sum(count) as total by user http_method | eval "Creds Entered"=if(http_method="POST","Yes","No") | sort - total | fields - total | table user Action Count "Creds Entered"      The issue I'm running into is the field for Action (allowed/denied) is not just populating allowed or denied, but multiples of each: So far I have reviewed the following with no success: https://docs.splunk.com/Documentation/Splunk/8.0.5/SearchReference/MultivalueEvalFunctions https://docs.splunk.com/Documentation/Splunk/8.0.5/SearchReference/Multivaluefunctions https://docs.splunk.com/Documentation/Splunk/8.0.5/SearchReference/Mvcombine https://docs.splunk.com/Documentation/Splunk/8.0.5/SearchReference/Makemv#Description https://docs.splunk.com/Documentation/Splunk/8.0.5/SearchReference/Nomv https://docs.splunk.com/Documentation/Splunk/8.0.5/SearchReference/Mvexpand https://community.splunk.com/t5/Splunk-Search/Dedup-within-a-MV-field/td-p/34957  https://community.splunk.com/t5/Splunk-Search/dedup-gives-different-result-if-a-table-command-is-used-before/td-p/253021 https://community.splunk.com/t5/Splunk-Search/dedup-results-in-a-table-and-count-them/td-p/40339 Any help would be greatly appreciated.
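One possible restructuring (a sketch; it folds status into action before the final aggregation, so each user gets a single Allowed and a single Denied count instead of a multivalue list):

```
| stats count by user, status, http_method
| eval action=if(match(status,"^2\d+$"), "Allowed", "Denied")
| chart sum(count) as Count over user by action
```

chart ... over user by action pivots the two action values into their own columns; the http_method/"Creds Entered" handling from the original search is omitted here for brevity.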
What is best practice for the HEC endpoint(s) for the "Phantom Remote Search" app in a clustered environment? Per the instructions in the url below for configuring the "Phantom Remote Search" app in a distributed environment, the HEC endpoint(s) are implied to be indexer server(s). https://docs.splunk.com/Documentation/PhantomRemoteSearch/1.0.14/PhantomRemoteSearch/Connecttodistributedsplunk Our environment uses clustered indexers. Can a heavy forwarder with a HEC endpoint be used to externalize search of a Phantom environment instead of the HEC endpoint(s) being on the indexer(s)?
Hi, I have this error in my AppInspect report: Do not supply a local.meta file - put all settings in default.meta. File: metadata/local.meta. In my app, under the metadata folder, I can see the two files: local.meta and default.meta. Where in default.meta do I put the local.meta content: at the start or the end? Can local.meta be deleted? Is it recreated while the app runs or is updated? Is there documentation describing this? Best regards, Altin
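For what it's worth, .meta files are stanza-keyed like other Splunk .conf files, so the position within default.meta should not matter; stanzas from local.meta can be appended anywhere in the file. A hypothetical merged stanza (the view name is a placeholder) might look like:

```
[views/my_view]
access = read : [ * ], write : [ admin ]
export = system
```

If the same stanza exists in both files, the settings would need to be reconciled into one stanza rather than duplicated.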
My custom alert is triggering mails for zero events; I am not sure why it fires for 0 when the condition is responseStatus > 399. I have created the alert with condition responseStatus 499>20 to trigger an email, but it also returns a zero record every minute and triggers mail. Is it because the query runs with timechart instead of stats count, or should we not create it with stats count? (responseStatus>399) | dedup requestId | stats count by responseStatus How should I set up a custom alert for this?
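One common pattern (a sketch, not from the thread) is to enforce the threshold inside the search itself, so the alert returns no rows unless the threshold is breached, and then use the trigger condition "number of results > 0":

```
(responseStatus>399)
| dedup requestId
| stats count by responseStatus
| where count > 20
```

With the where clause doing the comparison, quiet minutes produce an empty result set and the alert stays silent instead of mailing a zero-count row.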
I'm trying to move our Splunk Stream app to a new system but am not finding an easy way to transfer the configuration, since it is stored in the KV store.  Any suggestions?