All Topics



How to apply props.conf EVENT_BREAKER on UF for better data distribution instead of using outputs.conf forceTimebasedAutoLB=true?
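One approach (a minimal sketch; the sourcetype name is a placeholder, and EVENT_BREAKER requires UF 6.5 or later) is to define an event breaker per sourcetype in props.conf on the UF:

[my_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

With this in place the UF can switch output streams on event boundaries, which tends to distribute data across indexers better than forcing outputs.conf forceTimebasedAutoLB = true.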
If I want to create a peer node, is it mandatory to have a master node, or is the master node optional? I learned how to create both, but I don't get whether one needs the other.
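For context, a sketch of the config side (hostnames and keys are placeholders; newer Splunk versions name the modes manager/peer instead of master/slave): a cluster peer must point at a cluster master, so for clustered peers the master is mandatory; only a standalone, non-clustered indexer needs neither stanza. In server.conf on the master:

[clustering]
mode = master
replication_factor = 2
search_factor = 2
pass4SymmKey = changeme

And on each peer:

[replication_port://9887]

[clustering]
mode = slave
master_uri = https://master.example.com:8089
pass4SymmKey = changeme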
Hello Splunk Ninjas! I'm new to the group (and to Splunk) and need your assistance with designing a regex expression. I need to filter for the value of Message in this sample log line:   2022-09-23T13:20:25.765+01:00 [29] WARN Core.ErrorResponse - {} - Error message being sent to user with Http Status code: BadRequest: {"Message":"Sorry, only real values are valid in this environment.","UserMessage":null,"Code":64,"Explanation":null,"Resolution":null,"Category":3}   I'm interested in extracting the values of Message, Code, Resolution, and Category. Any help much appreciated! Thanks again
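A possible starting point (a sketch; index and sourcetype are placeholders): capture the JSON payload with rex and hand it to spath rather than regexing each field individually.

index=your_index sourcetype=your_sourcetype "Error message being sent to user"
| rex "Http Status code: \w+: (?<payload>\{.+\})"
| spath input=payload
| table Message Code Resolution Category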
I am pushing DNS logs to Splunk Cloud and I notice the QueryType is in numeric format; I would like to see it in string format.

Sample log:

{"ColoID":378,"Datetime":"2022-09-23T23:55:23Z","DeviceID":"df34037e","DstIP":"xx.xx.xx.xx","DstPort":0,"Email":"non_identity@ec.com","Location":"London","Policy":"","PolicyID":"","Protocol":"https","QueryCategoryIDs":[26,81],"QueryName":"europe-west9-a-osconfig.googleapis.com","QueryNameReversed":"com.googleapis.europe-west9-a-osconfig","QuerySize":67,"QueryType":28,"RData":[{"type":"28","data":"F2V1cm9wZS13ZXN0OS1hLW9zY29uZmlnCmdvb2dsZWFwaXMDY29tAAAcAAEAAADdABAqABRQQAkIIAAAAAAAACAK"},{"type":"28","data":"F2V1cm9wZS13ZXN0OS1hLW9zY29uZmlnCmdvb2dsZWFwaXMDY29tAAAcAAEAAADdABAqABRQQAkIHwAAAAAAACAK"},{"type":"28","data":"F2V1cm9wZS13ZXN0OS1hLW9zY29uZmlnCmdvb2dsZWFwaXMDY29tAAAcAAEAAADdABAqABRQQAkIFQAAAAAAACAK"},{"type":"28","data":"F2V1cm9wZS13ZXN0OS1hLW9zY29uZmlnCmdvb2dsZWFwaXMDY29tAAAcAAEAAADdABAqABRQQAkIIQAAAAAAACAK"}],"ResolverDecision":"allowedOnNoPolicyMatch","SrcIP":"xx.xx.xx.xx","SrcPort":0,"UserID":"723f7"}

In the above log you will notice "QueryType":28. I'd like to replace 28 with a string, AAAA; the other DNS query types can be found in https://en.wikipedia.org/wiki/List_of_DNS_record_types. Is there a way I could replace or append the query type's string instead of the numeric value that shows up in the logs, using techniques like lookup or join?

Desired log (only QueryType is changed from 28 to AAAA):

{"ColoID":378,"Datetime":"2022-09-23T23:55:23Z","DeviceID":"df34037e","DstIP":"xx.xx.xx.xx","DstPort":0,"Email":"non_identity@ec.com","Location":"London","Policy":"","PolicyID":"","Protocol":"https","QueryCategoryIDs":[26,81],"QueryName":"europe-west9-a-osconfig.googleapis.com","QueryNameReversed":"com.googleapis.europe-west9-a-osconfig","QuerySize":67,"QueryType":AAAA,"RData":[{"type":"28","data":"F2V1cm9wZS13ZXN0OS1hLW9zY29uZmlnCmdvb2dsZWFwaXMDY29tAAAcAAEAAADdABAqABRQQAkIIAAAAAAAACAK"},{"type":"28","data":"F2V1cm9wZS13ZXN0OS1hLW9zY29uZmlnCmdvb2dsZWFwaXMDY29tAAAcAAEAAADdABAqABRQQAkIHwAAAAAAACAK"},{"type":"28","data":"F2V1cm9wZS13ZXN0OS1hLW9zY29uZmlnCmdvb2dsZWFwaXMDY29tAAAcAAEAAADdABAqABRQQAkIFQAAAAAAACAK"},{"type":"28","data":"F2V1cm9wZS13ZXN0OS1hLW9zY29uZmlnCmdvb2dsZWFwaXMDY29tAAAcAAEAAADdABAqABRQQAkIIQAAAAAAACAK"}],"ResolverDecision":"allowedOnNoPolicyMatch","SrcIP":"xx.xx.xx.xx","SrcPort":0,"UserID":"723f7"}

Thanks!
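Since rewriting the raw event is rarely worth it, one search-time option (a sketch, assuming a hypothetical lookup file dns_record_types.csv with columns QueryType and QueryTypeName built from the Wikipedia list) is:

index=your_index sourcetype=your_sourcetype
| spath
| lookup dns_record_types.csv QueryType OUTPUT QueryTypeName
| eval QueryType=coalesce(QueryTypeName, QueryType)

This displays AAAA instead of 28 in results without modifying the stored log; defining it as an automatic lookup in props.conf would apply it to every search.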
My sample log is:

2022-09-12 34:45:12.456 info  Request uri [/asdff/aii/products] Request patameters [] Request payload [Request body size : : 5678 bytes Request body : : [{\activaterequest\:\ESRTYBBS\*\*, \"addresslines\":[{\"addressLineOrder\":\"NAME\"linevalues\":[\"esmal interger\"]}], \"productsio\":\"IM630\", \"productjourneykey\":\"IM630-p-6789778\",\"lineValues\":[\"sejo guleim ramo versa"]}], \"statusdesc\":\"unknown protocol version. http header [x-aacs-rest-version]. Assuming current version [v1.0]\"}],[{ \number\"4\",\"storePONumber\":\"3456\*}, \"app\",\"message\":\"Action taken when more than 10 points\"}], :[{\"serverstatuscode\":\"400 bad_request\",\"severity\", \"statusdesc\":\"Action taken when more than 10 points\"}], \"number\"6\"]

My query is: index=axcf "Action taken when more than 10 points" But I want the following values (productsio, addressLineOrder, linevalues, storePONumber, message, serverstatuscode, statusdesc) in table format. How can I do this?
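The heavy escaping in this payload makes clean JSON extraction unlikely, so a rough rex-based sketch may be the practical route (these patterns are guesses against the sample above and will need tuning to your real escaping):

index=axcf "Action taken when more than 10 points"
| rex "productsio\W+(?<productsio>\w+)"
| rex "addressLineOrder\W+(?<addressLineOrder>\w+)"
| rex "linevalues\W+(?<linevalues>[\w ]+)"
| rex "storePONumber\W+(?<storePONumber>\d+)"
| rex "serverstatuscode\W+(?<serverstatuscode>\d+ \w+)"
| rex "message\W+(?<message>[\w ]+)"
| rex "statusdesc\W+(?<statusdesc>[\w .\[\]-]+)"
| table productsio addressLineOrder linevalues storePONumber message serverstatuscode statusdesc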
Hello, my goal is to send RRD file data to a Splunk indexer. I have a remote host that currently forwards linux_secure data to the indexer, which works fine. I am never able to create an input for any port, TCP or otherwise, from the Data/Data inputs/TCP dialog (screenshot omitted). When I configure a TCP forward-server using the UF, the forward-server never goes active, and I only get "cooked" data on the indexer; the host and sourcetype are configured. If I configure a port (TCP or UDP) from Settings/Data/Forwarding and receiving (screenshot omitted), I get data to the indexer. I may be missing something. I installed collectd on a remote host and configured it for the csv plugin and the cpu plugin; this data is being collected and saved to the /var/lib/collectd directory on the remote host. How can I get this data into Splunk and graph it? I can see data coming in, but cannot do anything with it. The Splunk website says that HEC inputs must be used to get metrics into Splunk. How do I configure the remote host to do this, i.e., send the data from collectd to Splunk? I am open to suggestions and clarification. Thanks, eholz1
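For the collectd part, the documented route is HEC: create a HEC token tied to a metrics index with sourcetype collectd_http, then point collectd's write_http plugin at the raw collector endpoint. A sketch of the collectd.conf side (URL and token are placeholders; options as in collectd 5.x write_http):

LoadPlugin write_http
<Plugin write_http>
  <Node "splunk_hec">
    URL "https://your-indexer:8088/services/collector/raw"
    Header "Authorization: Splunk YOUR-HEC-TOKEN"
    Format "JSON"
    Metrics true
    StoreRates true
  </Node>
</Plugin>

Once the data lands in a metrics index, you can graph it with | mstats or the Analytics workspace.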
Hello, I have started my Splunk Cloud trial and it gives me the below error when I try to access the link: "Too many HTTP threads (1267) already running, try again later. The server can not presently handle the given request."
I am trying to create a query that returns a table showing counts of different error codes and the percentage of transactions that are failing (error != 0) for each service:

service   0      3100   2000   1200   % Failure
Foo       1000   12     0      0      1.2%
Bar       100    0      3      2      5.0%

My query, which returns the above table, is:

index=my_index | where error=0 OR error!=0 | chart count by service, error | eval "% Failure" = round(('3100'+'2000'+'1200')/('3100'+'2000'+'1200'+'0'),2)."%"

How can I modify this query so that I don't need to hardcode each error code into the last part of the query, as error codes may vary?
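One way to avoid hardcoding the error codes (a sketch; it assumes the success count always lands in the column named 0, and it multiplies by 100 so the result matches the 1.2% in the example) is to let addtotals sum whatever columns chart produces:

index=my_index
| chart count over service by error
| addtotals fieldname=Total
| eval "% Failure"=round(100*(Total-'0')/Total,1)."%"
| fields - Total

Here Total is the row total across all error columns, so (Total - '0') is the failure count regardless of which error codes appear.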
tstats shows an error if I include a JSON field in the "where" clause. The same happens with CSV fields. For example, if my source is like {"host": "<hostname>", "IP": "<IP address>"} and I do a search

| tstats count where IP = 10.0.0.1

Splunk displays "When used for 'tstats' searches, the 'WHERE' clause can contain only indexed fields. Ensure all fields in the 'WHERE' clause are indexed. Properly indexed fields should appear in fields.conf." The problem with fields.conf is that it doesn't deal with the original data structure. With JSON, there is always a chance that a regex will not properly capture a field. With CSV, the failure rate is even higher. Is there some way to do tstats with structured sources? (I notice that despite the warning, tstats still performs OK, but I'd rather users didn't see such an error message.)
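If the data is ingested as structured JSON, one option (a sketch; the sourcetype name is a placeholder, and for forwarded data this must live on the UF, since structured parsing happens there) is to create real index-time fields with INDEXED_EXTRACTIONS in props.conf:

[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

The fields are then genuinely indexed, with no fields.conf regex guesswork, and usable in tstats:

| tstats count where index=my_index IP="10.0.0.1"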
How do I specify an earliest/latest search relative to the global time range selector? If I choose 9/22/2022 in the global time range selector, I want my search to search from 2am to 3pm on that day. When I specify earliest=@d+2h latest=@d+15h, this completely overrides the global time selector, and the snap is relative to the current time instead of the date chosen in the global time range selector.
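One workaround (a sketch; the index is a placeholder) is to keep the picker's range and filter inside the search with addinfo, which exposes the selected range as info_min_time:

index=my_index
| addinfo
| where _time>=relative_time(info_min_time,"@d+2h") AND _time<=relative_time(info_min_time,"@d+15h")

This assumes the picker covers a single day; earliest/latest written in the search string always replace the picker rather than combining with it.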
Has anyone worked with Splunk and any PDF document management software to enable the addition of Bates numbering and redaction for legal documents?    This is a unique use case and could be widely utilized if successful.
Hello, I have an odd issue which seems to have been resolved, but I would like to know its root cause. I inherited a Splunk configuration with one of the stanza entries in inputs.conf being:

[monitor:///var/log/messages*]
sourcetype = syslog
index = os
disabled = 0

When I perform an ls -l on /var/log/messages* I get the below:

-rw-------. 1 root root 7520499 Sep 23 07:15 messages
-rw-------. 1 root root 4795535 Aug 28 01:45 messages-20220828
-rw-------. 1 root root 6636499 Sep 4 01:42 messages-20220904
...

When I do an SPL search on any of the possible sources (since the stanza uses "*"), I get no results except for source=messages. I do not get results for source=messages-20220828 (even if I extend earliest=-365d). When rsyslog executed and rotated the messages log file this past week, at about 2am on Saturday, Splunk stopped indexing the messages log file. The messages log file kept being populated by Linux, so that side seems to be working as expected. The last log entry Splunk recorded was:

_time = 2022-09-18 01:46:40
_raw = Sep 18 01:46:40 ba-dev-web rsyslogd: [origin software="rsyslogd" swVersion="8.24.0-57.el7_9.3" x-pid="1899" x-info="http://www.rsyslog.com"] rsyslogd was HUPed

I restarted the splunkforwarder on the server with the issue; this fixed the issue and Splunk started indexing the messages log entries again. To attempt a permanent solution (restarting the forwarder manually is not an adequate solution), I created the below stanza:

[monitor:///var/log/messages]
index = test
disabled = 0

I do not believe I need the "*" because 1) messages* sources are not being indexed by Splunk anyway (only source=messages), so why use "*", and 2) we do not need to index the messages backup log files. When I came to work today, 18 hours after the "fix" (the restart of the Splunk forwarder), my stanza is still working and indexing log entries as expected, but the previous one, [monitor:///var/log/messages*], does not index log entries any more. Using the working stanza, I determined that the last entries before Splunk stopped indexing were (first column is _time, second column is _raw):

2022-09-22 14:03:38 Sep 22 14:03:38 ba-prod-web audisp-remote: queue is full - dropping event
2022-09-22 14:03:38 Sep 22 14:03:38 ba-prod-web systemd: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
2022-09-22 14:03:38 Sep 22 14:03:38 ba-prod-web systemd: Stopping Systemd service file for Splunk, generated by 'splunk enable boot-start'...
2022-09-22 14:03:38 Sep 22 14:03:38 ba-prod-web splunk: Dying on signal #15 (si_code=0), sent by PID 1 (UID 0)
2022-09-22 14:03:38 Sep 22 14:03:38 ba-qa-web audisp-remote: queue is full - dropping event
2022-09-22 14:03:37 Sep 22 14:03:37 ba-qa-web audisp-remote: queue is full - dropping event
2022-09-22 14:03:36 Sep 22 14:03:36 ba-qa-web audisp-remote: queue is full - dropping event

The last entry for the stanza that stopped working was:

2022-09-22 14:03:37 Sep 22 14:03:37 ba-qa-web audisp-remote: queue is full - dropping event

All the other monitor and scripted inputs are working on that server except for the one above. The version of the forwarder is 7.2.3; I am running other forwarders on this version that index messages log entries and work as expected. The stanza I used was a copy and paste from the Splunk_TA_nix add-on (except I removed the other log files and just used messages), so IMO this should be best practice. I have a few questions:

1. What might be the reason the stanza with "*" no longer works while the one without it works?
2. Am I correct to believe that we do not need the stanza with "*"? What consequences of not using a stanza with "*" might I not be aware of?
3. Why would PID 1 (root) kill Splunk? (I believe this is the reason Splunk stopped indexing messages log files the second time.)
4. Any insights to understand this issue would be greatly appreciated.

As far as I know right now, using my stanza should be good practice if we do not need the backup messages log files, but I am concerned I am missing something.
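One way to investigate (a diagnostic sketch, not a fix; the host value is a placeholder) is to ask the forwarder's own logs what the tailing processor thought of the file around the rotation:

index=_internal sourcetype=splunkd host=your-forwarder (component=TailReader OR component=WatchedFile OR component=TailingProcessor) "/var/log/messages"

Messages there about matching checksums, or about files too small to compute a seek CRC, would point at a CRC collision between the rotated files confusing the wildcard monitor.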
Is it possible to add text to a chart? We have a couple of color-blind users, and as far as our developers know, there's no way to add text directly on top of the chart values. I find it hard to believe that Splunk doesn't have this basic accessibility feature. Would appreciate any input on this. Screenshots (omitted here) show what is currently happening vs. what we would like to achieve. Thanks.
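If these are Simple XML dashboards, the built-in data-label option may be what you want (a sketch; valid values are none, all, and minmax). Added under the chart element, it prints each value directly on the chart:

<option name="charting.chart.showDataLabels">all</option>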
Hi, I am trying to monitor logs on a server. I have a UF on it and am trying ./splunk add monitor. When I put in the path, index, and so on, I keep getting this error: "PARAMETER NAME: PATH MUST BE A FILE OR DIRECTORY". I have gone through tons of questions on here, but none answers this particular question. Thanks for your help.
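For reference, the CLI shape that should work (path, index, and sourcetype here are placeholders) is:

./splunk add monitor /var/log/myapp -index main -sourcetype myapp_logs

That error usually means the path as typed does not exist, or is not readable by the user running splunk, so checking quoting, typos, and permissions on the exact path is a good first step.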
We are using OCP 4.10 to deploy the splunk/splunk:7.2.2 image, but the pod is going into CrashLoopBackOff state, and in the logs we get this error: sh: /opt/container_artifact/splunk-container.state: Permission denied. Could you please help us with this issue?
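OpenShift runs containers with a random UID by default, which commonly causes this kind of permission error. One workaround to test (an assumption-laden sketch; the service account and namespace are placeholders, and relaxing SCCs has security implications) is to let the pod's service account run with the image-defined UID:

oc adm policy add-scc-to-user anyuid -z default -n your-namespace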
We are doing a Splunk integration with OCP 4.10 and need to install Splunk, but after installation we get an error in the logs: sudo: unable to send audit message: Operation not permitted. The error appears after the execution tasks run. Could you please help us with this issue?
When I was studying macros, I sometimes see that we put our arguments between ` ` and sometimes between $ $. Does anyone know which we have to use, the backticks or the dollar signs?
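A small macros.conf sketch (names are hypothetical) may make the distinction concrete: dollar signs wrap an argument inside the macro definition, while backticks wrap the macro call in your search.

[filter_by_host(1)]
args = host
definition = search host=$host$

Usage in a search:

index=main `filter_by_host(web01)`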
Hello All, I am kind of new to Splunk. I need help removing "ms" from the below data, even before the data is indexed, so that I can use the tstats command on these values to create faster searches. Could anyone please help with this?

Sample data: 2022-09-11 22:00:59,998 INFO -(Success:true)-(Validation:true)-(GUID:68D74EBE-CE3B-7508-6028-CBE1DFA90F8A)-(REQ_RCVD:2022-09-11T22:00:59.051)-(RES_SENT:2022-09-11T22:00:59.989)-(SIZE:2 KB)-(RespSent_TT:0ms)-(Actual_TT:938ms)-(DB_TT:9ms)-(Total_TT:947ms)-(AppServer_TT:937ms)

How do I remove the string "ms" after all the response times in this data?
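One index-time option (a sketch; the sourcetype name is a placeholder, and it must be deployed on the indexer or heavy forwarder that parses the data) is a SEDCMD in props.conf that strips the trailing ms from each timing field:

[my_sourcetype]
SEDCMD-strip_ms = s/(_TT:\d+)ms/\1/g

Note this only rewrites _raw; for tstats you would additionally need the timings as indexed fields (e.g., via transforms.conf), so it is worth checking first whether search-time extraction with stats is actually too slow.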
The transaction is identified by jsessionid. For the SPL query to find all transactions which lasted less than 5 seconds, should I use *|transaction jsessionid maxspan=5 or *|transaction jsessionid timelimit=5? I'm finding it hard to see the difference between them. Or do we not add maxspan or timelimit at all, since it is "less than" 5 seconds?
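Neither option filters by duration: maxspan only controls how events are grouped into a transaction, and timelimit caps how long a transaction can stay open. To find transactions lasting less than 5 seconds, filter on the duration field that transaction produces (a sketch; the index is a placeholder):

index=web
| transaction jsessionid
| where duration<5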
Hi, index=network sourcetype=cisco:asa NOT src_ip IN("10.0.0.0/8","10.0.0.1,"10.0.0.2") | bucket _time span=1m | stats dc(dest_port) as num_dest_port dc(dest_ip) as num_dest_ip by src_ip _time | ... See more...
Hi, index=network sourcetype=cisco:asa NOT src_ip IN("10.0.0.0/8","10.0.0.1,"10.0.0.2") | bucket _time span=1m | stats dc(dest_port) as num_dest_port dc(dest_ip) as num_dest_ip by src_ip _time | where num_dest_port > 500 OR num_dest_ip > 500 | eval Total=num_dest_port+num_dest_ip | sort -Total | dedup src_ip | With Lookups: index=network sourcetype=cisco:asa NOT src_ip IN("10.0.0.0/8","10.0.0.0/8","10.0.0.1,"10.0.0.2") | search NOT [|inputlookup Blocked_IP.csv] | fields src_ip | bucket _time span=1m | stats dc(dest_port) as num_dest_port dc(dest_ip) as num_dest_ip by src_ip _time | where num_dest_port > 500 OR num_dest_ip > 500 | eval Total=num_dest_port+num_dest_ip | sort -Total | dedup src_ip   I am not able to exclude the results from the Lookups or if I modify the search I'm not getting any results at all. Kindly help.