Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, I would like to learn how to build Splunk queries and dashboards. Is there any link or Splunk page available where we can learn via lab exercises or scenario-based examples?
Hi Splunkers, please tell me the setting that prevents the blue frame and menu from being displayed when users click an object in Dashboard Studio. In Dashboard Studio, when users click a dashboard object (e.g. map, single value, table, image, etc.), a blue frame appears around that object along with a download/search/full screen menu. In some cases this is a little annoying, so I would like to prevent the blue frame and menu from being displayed. Is this possible?
We have many completely different events. We get a result set from Search 1, but we want to exclude from it any records that appear in the results of Search 2. For example, these are the events:

event1: {name: mike, action: ok}
event2: {name: cool, action: ok}
event3: {name: mike, state: invalid}

My first search (Search 1) should get event1 and event2:

ok | table name, action

My second search (Search 2) should get event3:

invalid | fields name

However, Search 1 should exclude the results of Search 2, so that only event2 is returned. I am trying to achieve this with:

ok | search NOT name IN ([invalid | fields name]) | table name, action

But obviously it is unsuccessful. Thanks for any advice.
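A minimal sketch of one common approach, assuming both searches run over the same data and that name is an extracted field: a subsearch placed inside a NOT clause is implicitly formatted into name="..." terms, so the outer search drops every name returned by the inner one.

ok NOT [search invalid | fields name | dedup name] | table name, action

The field name returned by the subsearch must match the field name used in the outer search; if they differ, add a rename inside the subsearch.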
Is there any way to set up something on a dashboard that will tell me if a service on a remote CentOS box goes down? I run Arkime on a CentOS 8 box and I want a dashboard in Splunk that will show me the status of these services: arkimecaprute.service, arkimeviewer.service, elasticsearch.service. Is this possible?
JSON field=value pairing: I have a log with a single field named TestCategories that holds multiple values, like x,y,z,... How can I split it and create individual fields such as TestCategories1: x and TestCategories2: y? Example: TestCategories: x,y,z
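A minimal sketch, assuming TestCategories is already extracted as a single comma-separated string: makemv turns it into a multivalue field, and the eval pulls out numbered fields (extend the pattern for as many positions as you expect).

| makemv delim="," TestCategories
| eval TestCategories1=mvindex(TestCategories,0), TestCategories2=mvindex(TestCategories,1), TestCategories3=mvindex(TestCategories,2)

If you would rather have one event per value instead of numbered fields, adding | mvexpand TestCategories after the makemv does that.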
I just built my first lookup table because I have a CSV of about 200 servers in different IP spaces, and I need to do two things: 1. confirm the IPs in the CSV are in Splunk, and 2. display, per IP, what ports are listening. My query so far has been this:

index=* |stats count by src_ip , dest_port [|inputlookup networkservers.csv | fields "IPv4 Address" | rename "IPv4 Address " as query

I have confirmed the lookup table is there and I can see it, and I can query the network. I'm just having issues with feeding the 200+ IPs in as search terms and then marrying the ports and protocols with them. Thanks in advance if this makes sense, or am I looking at it all wrong?
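A minimal sketch of the usual pattern, assuming the CSV column is named "IPv4 Address" and the events carry src_ip and dest_port: the subsearch goes in the base search (not after stats) and renames the lookup column to the event field so it acts as a filter, then stats lists the listening ports per IP.

index=* [| inputlookup networkservers.csv | fields "IPv4 Address" | rename "IPv4 Address" as src_ip]
| stats values(dest_port) as listening_ports by src_ip

IPs from the CSV that return no rows here are the ones with no matching events in Splunk for the chosen time range.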
Actual log:

[{area: "CU", subid: "M", slgdattim: "2022022109515500", slgproc: "1362100032D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "jhuytvfghju789", slguser: "r4uhjk8", slgtc: "SE38", dummy3: "Z0097TORRENT_INBOUND", term_ipv6: "167.211.0.9", sal_data: "Jump to BAP Debugger by user C7C2QC(200): SourceLine:233->228, ByteCode(offset):MOVL(433)->mvqk(423) (Prgm:Z0097TORRENT_INBOUND, Meth:READ_DATA, Incl:Z0097TORRENT_INBOUND_M01, Line:228)"}, {area: "CU", subid: "M", slgdattim: "2022022111111200", slgproc: "0797100035D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "PS2WVV212651CF", slguser: "C6W7RZ", slgtc: "VKM1",dummy2: "D01", term_ipv6: "167.211.0.6", sal_data: "Jump to BAP Debugger by user C6W7RZ(200): SourceLine:48->63, ByteCode(offset):cmpr(2588)->cmpr(2603) (Prgm:PDB, Form:USER_EXIT_FUELLEN, Incl:ffujlkk_ADD_FIELDS, Line:63)"}, {area: "CU", subid: "M", slgdattim: "2022022111122600", slgproc: "0797100035D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "PS2WVV212651CF", slguser: "C6W7RZ", slgtc: "VKM1", dummy1: "D01", term_ipv6: "167.211.0.6", sal_data: "Jump to BAP Debugger by user C6W7RZ(200): SourceLine:63->77, ByteCode(offset):cmpr(2603)->cmpr(2650) (Prgm:APDB, Form:USER_EXIT_FUELLEN, Incl:hgguytol_ADD_FIELDS, Line:77)"}]

I want the above log broken into three events, as below. Please let me know the props.conf settings.

Event1:

[{area: "CU", subid: "M", slgdattim: "2022022109515500", slgproc: "1362100032D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "jhuytvfghju789", slguser: "r4uhjk8", slgtc: "SE38", dummy3: "Z0097TORRENT_INBOUND", term_ipv6: "167.211.0.9", sal_data: "Jump to BAP Debugger by user C7C2QC(200): SourceLine:233->228, ByteCode(offset):MOVL(433)->mvqk(423) (Prgm:Z0097TORRENT_INBOUND, Meth:READ_DATA, Incl:Z0097TORRENT_INBOUND_M01, Line:228)"}, {area: "CU", subid: "M", slgdattim: "2022022111111200", slgproc: "0797100035D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "PS2WVV212651CF", slguser: "C6W7RZ", slgtc: "VKM1",dummy2: "D01", term_ipv6: "167.211.0.6", sal_data: "Jump to BAP Debugger by user C6W7RZ(200): SourceLine:48->63, ByteCode(offset):cmpr(2588)->cmpr(2603) (Prgm:PDB, Form:USER_EXIT_FUELLEN, Incl:ffujlkk_ADD_FIELDS, Line:63)"}, {area: "CU", subid: "M", slgdattim: "2022022111122600", slgproc: "0797100035D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "jhuytvfghju789", slguser: "C6W7RZ", slgtc: "VKM1", dummy1: "D01", term_ipv6: "167.211.0.6", sal_data: "Jump to BAP Debugger by user C6W7RZ(200): SourceLine:63->77, ByteCode(offset):cmpr(2603)->cmpr(2650) (Prgm:APDB, Form:USER_EXIT_FUELLEN, Incl:hgguytol_ADD_FIELDS, Line:77)"}]
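A minimal props.conf sketch, assuming the goal is to break the array at each "}, {" boundary so every {...} object becomes its own event (the sourcetype name is a placeholder, and the TIME_ lines assume slgdattim, YYYYMMDDHHMMSS plus two trailing digits, is the timestamp you want). LINE_BREAKER discards whatever the first capture group matches, so the comma between objects is dropped and the next event starts at the following brace.

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(,\s*)\{
TIME_PREFIX = slgdattim:\s*"
TIME_FORMAT = %Y%m%d%H%M%S

This needs to go on the first heavy forwarder or indexer that parses the data, not on a universal forwarder or search head. The leading [ and trailing ] of the array will stay attached to the first and last events unless you also strip them (for example with a SEDCMD).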
I am updating a CSV on disk via the search API using outputlookup. Each time I run my script with the same source CSV, the CSV on disk is updated correctly and shows the correct number of rows. But when I try to read the CSV via the UI or the /jobs and /export APIs, it doesn't return the correct number of rows roughly 40% of the time. If I run my script 10 times, 6 times it's a perfect match and the row count is what I expect; 4 times it fails with a random number of rows returned. Each time, though, the number of rows in the CSV on disk is correct.

From what I can see, I am not hitting quotas or limits, and it doesn't seem to be related to system load. There isn't anything obvious to me in any logs either.

My flow is like this:
1. API call with append=f to overwrite the CSV and set the column names
2. Multiple API calls with append=t to add data (10 rows of data per call)
3. A final search to ensure the expected number of rows match

My validation search is like this, and this is where the inconsistencies happen:

| inputlookup my_new_file.csv | table <column name, column name....>

My CSV is currently 110 columns and 5625 rows. When it works, the search returns the correct number of rows every time. When it doesn't work, it consistently returns the same wrong number. To clarify: if I get 500 rows instead of the expected 5625, every search returns 500 rows until I re-run my script to update the CSV on disk.

Thanks
Hi all, I'm completely new to Splunk and have some problems understanding the data flow and what to configure where. I have a working environment with 2 indexers and 1 heavy forwarder which is also the search head, all running version 7.3.6 on Ubuntu 20.04. Additionally there are several dozen Windows servers and ~50 Linux servers; a lot of them have splunkforwarder installed and send data to the indexers. This was set up some years ago by people who have since left the company. My task now is to add data from the Linux machines to Splunk. Since I have a working environment and plenty of examples of how it's done on other machines, it didn't sound too complicated. But...

The task: all Linux servers run the same job, which creates a log file in /var/log/.

My solution: on a server that already sends data to Splunk, I ran:

splunk add monitor /var/log/mylog

The result: the data shows up in Splunk. Yepee, easy. Then I went to a server that does not send data to Splunk.

My solution: download and install splunkforwarder-7.3.6-47d8552a4d84-linux-2.6-amd64.deb, then:

splunk add forward-server indexer1:9997
splunk add forward-server indexer2:9997
splunk add monitor /var/log/mylog

Yepee, data shows up on the search head.

Next task: build a dashboard with the data and some filter options.

My solution: I found a similar dashboard and tried to adapt it to my needs. Not that easy, but I got it done, without the filters at first. And then the problems start: the log file contains headers and lots of other junk I cannot filter out easily. While researching how to delete events, I found out that I have multiline events. I learned about LINE_BREAKER and SHOULD_LINEMERGE and indexes and other config stuff, and here the confusion starts: where do I have to configure what?

After reading some docs and different solutions here in the forum, I decided to start from zero with one of the Linux servers.

I deleted the results from this server from the main index:

source=/var/log/mylog myserver | delete

I removed the forwarders and monitor from the Linux server:

splunk remove forward-server indexer1:9997
splunk remove forward-server indexer2:9997
splunk remove monitor /var/log/mylog

I created a new index on the 2 indexers and on the search head with the GUI. Let's call it myindex; I didn't change the defaults.

I modified etc/users/admin/myapp/local/props.conf on the search head, because that was the only place where I could find a reference to the monitor I'd added:

[mylog-too_small]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

[mylog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

Then I added the forwarders and monitor again:

splunk add forward-server indexer1:9997
splunk add forward-server indexer2:9997
splunk add monitor /var/log/mylog

What the heck? No data shows up on the search head. What have I missed, and where? And in what order are all these props.conf files applied? I have some of them in different folders. Any help or hint is welcome.
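A minimal sketch of where the pieces usually go, assuming data flows universal forwarder -> indexers; the names myindex and mylog are taken from the post, and the exact layout is an assumption rather than a fixed rule. Index-time settings such as LINE_BREAKER and SHOULD_LINEMERGE only take effect on the first full Splunk instance that parses the data (the indexers, or a heavy forwarder in front of them), not in a props.conf on the search head, and the forwarder's inputs.conf decides the index and sourcetype.

inputs.conf on the universal forwarder:

[monitor:///var/log/mylog]
index = myindex
sourcetype = mylog

props.conf on both indexers (then restart them):

[mylog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

Precedence between multiple props.conf files follows the usual configuration layering: roughly, system/local beats an app's local, which beats the app's default, and etc/users/... directories only apply to search-time settings for that user. Running splunk btool props list mylog --debug shows exactly which file wins for each setting.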
Hi, I want to break the line at {"id" so that Splunk treats everything from {"id onwards as a new event. I have included the props.conf and the event below; please review and let me know in case of any concerns.

INDEXED_EXTRACTIONS = JSON
KV_MODE = none
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
SEGMENTATION = iso8601
#TIME_FORMAT=%YYYY-%MM-%DDT%H:%M:%SZ
TIMESTAMP_FIELDS = started_on
TRUNCATE = 0
category = Ver. 1
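A minimal sketch of the line-breaking part, assuming consecutive records are separated by a comma or a newline and each new record starts with {"id (the stanza name is a placeholder). LINE_BREAKER's first capture group is discarded, and because {"id sits after the group it is kept as the start of the next event. Note that INDEXED_EXTRACTIONS = JSON uses the structured-data parsing pipeline, so while testing it may be simpler to drop it and rely on LINE_BREAKER at index time plus KV_MODE = json at search time.

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ((?:,|[\r\n])\s*)\{"id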
Hello, I have set up a Splunk app that has custom REST endpoints which make Splunk API calls, currently via the Python requests module. I am authenticating to the API by grabbing the session key via the custom REST endpoint's handler. This works, but I am wondering whether the SDK or the Splunk Python libraries can handle this a little more gracefully. I noticed when building the dashboards that I can just call service.METHOD in SplunkJS to hit the custom REST endpoints, so I'm wondering if there is a Python class/method that handles the authentication, etc., in the same way. Thanks
Hi All, We have recently migrated from on-prem to cloud. How can we make sure that all the dashboards are working fine? Do we have any query to check for Splunk errors?   Thanks, Vijay Sri S
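A couple of hedged starting points in SPL: the REST call below lists the dashboards on the search head so you can build an inventory to walk through, and the _internal search surfaces recent errors; treat the exact filters as assumptions to adapt to your environment.

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| table title eai:acl.app eai:acl.owner updated

index=_internal log_level=ERROR earliest=-24h
| stats count by component, sourcetype

Dashboards that reference indexes, lookups, or macros that did not move to the cloud stack will simply show empty panels rather than log errors, so a manual spot check of the important dashboards is still worthwhile.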
I am currently working with DB Connect in order to send logs over to an Oracle DB. I have a scheduled output set up on my heavy forwarder, but it has not been sending logs to Oracle, so I have been trying to debug it with the dbxoutput command. When I run the search on the HF it completes successfully and, from what I am seeing, it should be sending the logs over. However, when I query the Oracle table in the SQL Explorer, I do not see any new logs in the table. Does anyone know where I can look on the CLI to see if there are any error logs associated with this output?
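A hedged place to start looking, assuming a reasonably recent DB Connect version: the app writes its own log files under $SPLUNK_HOME/var/log/splunk/ on the heavy forwarder (file names beginning with splunk_app_db_connect), and those files are also indexed into _internal, so either tailing them on the CLI or a search like the sketch below can surface output errors. The source filter is an assumption about the file naming, so loosen it if nothing comes back.

index=_internal source=*splunk_app_db_connect* (ERROR OR dbxoutput)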
Hi Team, this is Saurabh Dagar and I am working at an offshore company. We have a Splunk server there and we are trying to open the Splunk site, but it is not opening, and the splunkd service does not restart; it shows "the splunkd service is not in use and no one is using this service for long time". So my question is: how can we start the service? Thanks
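A hedged first-pass check from the server's command line, assuming a default Linux install under /opt/splunk (the path is an assumption; the commands themselves are standard Splunk CLI):

/opt/splunk/bin/splunk status
/opt/splunk/bin/splunk start
tail -100 /opt/splunk/var/log/splunk/splunkd.log

If splunk start fails, the last lines of splunkd.log usually say why; common culprits are a full disk or wrong permissions on the Splunk installation directory.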
Hi All, we are migrating our AD provider to Azure AD. We generated the XML and cert file and uploaded the XML via the front end. Login works, but after validating, it lands on a page with this error:

Verification of SAML assertion using the IDP's certificate provided failed. Error: failed to verify signature with cert

In authentication.conf I see the stanza getting updated with new entries; only 2 lines remain of the old config:

caCertFile = /opt/splunk/etc/auth/cacert.pem
clientCert = /opt/splunk/etc/auth/server_old.pem

I am not sure whether the client cert is being used at all. I suspect something related to IdP certificate chains, based on other answers such as https://community.splunk.com/t5/Deployment-Architecture/Problem-with-SAML-cert-quot-ERROR-UiSAML-Verification-of-SAML/m-p/322375#M12072

I am not sure how to generate the certificate; the cert generated from Azure has only one stanza, not 3 as described. Any leads would be helpful.
How do I run a search using ldapsearch which shows all members of a group, along with each member's UPN? Currently, using ldapgroup (as shown below), we are only able to retrieve the basic CN for each member. However, I want to see the UPN for each user. Any suggestions?

Search:

|ldapsearch basedn="ou=test,ou=Groups,ou=Common Resources,ou=group,dc=ad,dc=private" search="(&(objectClass=group)(cn=*))" | ldapgroup | table cn,member_dn,member_name
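A hedged sketch, assuming your SA-ldapsearch version also provides the ldapfetch command (which looks up attributes for a DN held in a field): expand the group members into one row each and then fetch userPrincipalName for every member DN. Treat the exact option and attribute names as assumptions to verify against the app's documentation.

|ldapsearch basedn="ou=test,ou=Groups,ou=Common Resources,ou=group,dc=ad,dc=private" search="(&(objectClass=group)(cn=*))"
| ldapgroup
| mvexpand member_dn
| ldapfetch dn=member_dn attrs="userPrincipalName"
| table cn, member_dn, userPrincipalName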
Hi Team, we have a requirement to create a report based on the accessed time present in the logs. The time appears as a combination of seconds, milliseconds, microseconds and nanoseconds, for example: 1s79ms874µs907ns. In most cases the value starts with milliseconds, and in a few cases it starts with seconds. How do we convert a time like 1s79ms874µs907ns into a single value in one unit (seconds, milliseconds, microseconds or nanoseconds) so that we can then build the report? Or is there any other option to handle this at search time? Kindly help with this request.

Sample logs for reference:

DEBUG 2022-03-10 07:17:26,239 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 145ms227µs975ns ago, IN_USE
DEBUG 2022-03-10 07:07:26,239 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 1s79ms874µs907ns ago, IN_USE
DEBUG 2022-03-10 07:02:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 7ms215µs946ns ago, IN_USE
DEBUG 2022-03-10 06:57:26,237 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 168ms259µs830ns ago, IN_USE
DEBUG 2022-03-10 06:57:26,237 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 6s993ms781µs523ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 2ms593µs888ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 55ms239µs616ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 957ms778µs205ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 45ms536µs884ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 22ms906µs437ns ago, IN_USE
DEBUG 2022-03-10 06:47:26,238 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 46ms556µs466ns ago, IN_USE
DEBUG 2022-03-10 06:42:26,236 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 3s286ms410µs997ns ago, IN_USE
DEBUG 2022-03-10 06:37:26,239 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 842ms323µs432ns ago, IN_USE
DEBUG 2022-03-10 06:27:26,236 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: abcdefgh-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 7ms698µs576ns ago, IN_USE
DEBUG 2022-03-10 06:27:26,236 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: ijklmnop-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 18ms948µs359ns ago, IN_USE
DEBUG 2022-03-10 06:17:26,236 [Timer-x] com.abc.valid.AppData - EntryData >>>>>>>>ConnectionID:xxxxx ClientConnectionId: qrstuvwx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, accessed 257ms32µs814ns ago, IN_USE
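A minimal search-time sketch, assuming the duration always follows "accessed " and ends at " ago", and that any of the four units may be absent: the second rex pulls out the unit parts and the eval normalizes everything to milliseconds (adjust the arithmetic if you prefer another unit). The index and sourcetype are placeholders.

index=your_index sourcetype=your_sourcetype "EntryData"
| rex field=_raw "accessed (?<elapsed>\S+) ago"
| rex field=elapsed "(?:(?<sec>\d+)s)?(?:(?<msec>\d+)ms)?(?:(?<usec>\d+)µs)?(?:(?<nsec>\d+)ns)?"
| eval elapsed_ms = coalesce(tonumber(sec),0)*1000 + coalesce(tonumber(msec),0) + coalesce(tonumber(usec),0)/1000 + coalesce(tonumber(nsec),0)/1000000
| table _time elapsed elapsed_ms

With elapsed_ms as a plain number you can then report on it directly, for example with stats avg(elapsed_ms) or a threshold filter.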
Dear Splunk community, I have the following query:

index="myIndex" source="*mySource*" nameOfLog* "ExitCode: 0" | stats count by _time

Once a day an event is generated, so either it was generated (count = 1) or it was not (count = 0). I have a line chart for the last 30 days. On February 20th one event was generated, and on February 23rd one event was generated. On the 21st and 22nd of February, no events were generated, so I expect the line to dip in the chart like so: ------_------- This is not happening, and I am wondering why. How do I adjust this to show count=0 in the chart as well? Thanks.
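A minimal sketch of the usual fix: stats count by _time only produces rows for times that actually have events, whereas timechart creates one bucket per span across the whole search time range and fills empty buckets with 0, which gives the dip you expect.

index="myIndex" source="*mySource*" nameOfLog* "ExitCode: 0"
| timechart span=1d count

The span=1d is an assumption based on the once-a-day schedule; change it if you bucket differently.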
Scenario: 123 [abc xyz 11111]. I want to create a getter chain to collect the data in the middle of the array (xyz), where abc and xyz are dynamic. Could anyone help me get the right getter chain to capture this data in Analytics?

toString().split(123\\\ [).[1].xxx
We are trying to test out the new "cascading" option for the knowledge bundle replication policy. We have set the replication policy on the search head to "cascading" in distsearch.conf, and have set a new pass4SymmKey on the indexers for cascading replication in server.conf, as described in the docs: https://docs.splunk.com/Documentation/Splunk/8.1.6/DistSearch/Cascadingknowledgebundlereplication However, in the Monitoring Console, on the page "Search > Knowledge Bundle Replication > Cascading Replication", the indexers still say that the replication policy is "classic", not "cascading". The search head correctly identifies as "cascading", but not the indexers. Dashboard: https://<monitoring-console-server>/en-GB/app/splunk_monitoring_console/cascading_replication How do we get the Monitoring Console to correctly identify the knowledge bundle replication policy on the indexers as "cascading"? Is the check only looking for the setting "replicationPolicy = cascading"? If so, the indexers will always "fail" the check, as this setting is not applied on the indexers, as far as I understand.