All Topics


Hello everyone, I'm monitoring my Splunk Enterprise instance, and when I look at the splunkd logs both via the CLI and via this search:

index=_internal source="/opt/splunk/var/log/splunk/splunkd.log" log_level=ERROR

I find numerous SearchParser errors, notably this one:

ERROR SearchParser [20709 TcpChannelThread] - Missing a search command before '|'. Error at position '2' of search query '| |'.

How can I trace back to the search that generated this error (either the search string or the sid is fine)? Is that "20709" of any interest in this scenario?
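As a hedged starting point (not a confirmed answer): the bracketed "20709" is typically an internal thread id rather than a search id, so one way to hunt for the originating search is the audit index, where dispatched searches are logged with their search string and search_id. A sketch, assuming default audit logging:

```spl
index=_audit action=search info=granted
| search search="*| |*"
| table _time, user, search_id, search
```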
Hi Team, our clients are accidentally clicking the Run option of saved searches, and as a result I can see duplicate events in the summary index. I want to disable or remove the Run option from Splunk reports/alerts for specific users. How can I achieve this? Please advise.
Hi, I need to create a pie chart; however, the chart renders both categories in the same colour. How can I make them different? Thanks
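One hedged approach, assuming a Simple XML dashboard: the charting.fieldColors option maps series names to explicit colours. The query and category names below are placeholders:

```xml
<chart>
  <search>
    <query>index=main | stats count by category</query>
  </search>
  <option name="charting.chart">pie</option>
  <!-- map each category name to its own colour (hex values) -->
  <option name="charting.fieldColors">{"categoryA": 0x1E93C6, "categoryB": 0xF2B827}</option>
</chart>
```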
Hi, should I send event logs to the HTTP Event Collector with GZip encoding, or only as plain text?
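For what it's worth, a minimal sketch of building a gzip-compressed HEC request, assuming your Splunk version accepts Content-Encoding: gzip on the HEC endpoint (verify against your version's docs; the URL and token below are placeholders):

```python
import gzip
import json
import urllib.request

# Hypothetical HEC endpoint and token -- replace with your own.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_gzip_request(events):
    """Serialize HEC events (one JSON object per event, concatenated,
    as HEC expects) and gzip-compress the request body."""
    body = "".join(json.dumps({"event": e}) for e in events).encode("utf-8")
    compressed = gzip.compress(body)
    req = urllib.request.Request(
        HEC_URL,
        data=compressed,
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Encoding": "gzip",   # tells HEC the body is gzipped
            "Content-Type": "application/json",
        },
    )
    return req, body, compressed

req, raw, gz = build_gzip_request([{"msg": "hello"}, {"msg": "world"}])
print(gzip.decompress(gz) == raw)  # roundtrip sanity check
```

Gzip tends to pay off on large, repetitive log batches; for tiny single-event posts the compression overhead may not be worth it.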
How many log events can be sent in one HTTP POST request? Is there a limit? What is the maximum payload size?
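As a hedged pointer: HEC's request-size cap is governed by a limits.conf setting on the receiving instance, and the default varies by version (older releases defaulted to roughly 1 MB, newer ones to 800 MB), so check the limits.conf spec for your version. A sketch:

```ini
# limits.conf fragment on the HEC-receiving instance
[http_input]
# maximum allowed HTTP request body size, in bytes (value here is illustrative)
max_content_length = 838860800
```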
Good morning all, I am creating a timechart that has to show me top values. When I click on a particular value, it directs me to all events of that kind, not only the max() values presented in the chart, which come from this line: | timechart span=1d max(Priority_diffrence) by risk_object. Do you have any idea how to solve this? Any hints kindly welcome.
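One hedged way to make the drilldown match the chart: have the drilldown target re-derive the per-day maximum and keep only the matching events. Field names are copied from the question; $click.name2$ is the clicked series token in Simple XML, and the base search is a placeholder:

```spl
index=... risk_object="$click.name2$"
| bin _time span=1d
| eventstats max(Priority_diffrence) as max_diff by _time, risk_object
| where Priority_diffrence == max_diff
```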
Hi all, I'm trying to create a new input for the REST API call we built. Since this call should only be executed once a month (ticket data), I'm struggling with the only available interval option, which takes seconds rather than a crontab-style schedule. I also discovered that when the interval is set in seconds and Splunk is restarted, the seconds counter restarts from that point in time, which breaks a predictable input interval. Question: is it possible to use a cron entry in the underlying inputs.conf file of the created AoB app? Thanks for any feedback, Lothar
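For reference, a hedged sketch: in inputs.conf the interval key can take a cron expression (at least for scripted inputs; whether an Add-on Builder modular input honours it depends on how the input is implemented, so test it first). The stanza name below is hypothetical:

```ini
# inputs.conf fragment in the AoB-generated app (stanza name is a placeholder)
[my_rest_input://monthly_tickets]
# cron syntax: minute hour day-of-month month day-of-week
# -> run at 02:00 on the 1st of every month
interval = 0 2 1 * *
```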
Hello, I am administrating a distributed environment with 1 Search Head and 10 peers. One special aspect is that communication goes over a satellite link, so bandwidth is limited. The Search Head has Splunk Enterprise Security installed and is also a deployment server. The peers have the indexer role and all ingest Suricata IDS logs, while only one of them also ingests Windows logs. I have measured about 3 GB per day of data exchanged between the Search Head and the indexers, which seems like a lot to me. Can someone please explain what kind of data is transferred by default in a distributed environment? Some things to note: 1. The notable index and internal logs are stored locally on the Search Head and not forwarded to the peers. 2. The replication bundle is 16 MB. Thank you in advance. With kind regards, Chris
I am new to Splunk. I have a search query that returns a table, and I want to convert the first table into the second table's format. The percentage calculation is: sum of the 0-5% Q1 row value divided by the sum of the column total. How can I achieve this? Please help me. Thanks in advance.
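Without seeing the original tables (the screenshots did not survive), only a hedged generic pattern can be sketched: compute a per-column total with eventstats, then divide. Field names below are placeholders:

```spl
... | eventstats sum(count) as col_total by quarter
| eval pct = round(100 * count / col_total, 2)
```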
Input:

Message                                ID
... tt_1 ... tt_2 ... tt_9 ... tt_3    1
... tt_6 ... tt_4 ... tt_5             2

Output:

Message                                ID    TT
... tt_1 ... tt_2 ... tt_9 ... tt_3    1     tt_1 tt_2 tt_9 tt_3
... tt_6 ... tt_4 ... tt_5             2     tt_6 tt_4 tt_5

In the "Message" field above, "..." indicates some random text in between. So basically I want to extract all words starting with "tt_" and display them as in the table shown above. Can anyone help me with the Splunk query for it?
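A sketch of one way to do this, assuming events carry Message and ID fields as shown: rex with max_match=0 pulls every match into a multivalue field, and mvjoin flattens it into a space-separated string:

```spl
... | rex field=Message max_match=0 "(?<TT>tt_\d+)"
| eval TT = mvjoin(TT, " ")
| table Message, ID, TT
```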
Hi, as AppDynamics is part of Cisco, what is the procedure for Cisco employees to request a fully functional AppDynamics account for lab and testing purposes?
Hi all, I am trying to build a query that only shows the NEW results compared to yesterday. I would like an alert whose data shows ONLY keys that are new today compared to yesterday's results. For example:

{query} | stats count by key

Yesterday, the query returned "key1" and "key2":

| key  | count |
| key1 | 10    |
| key2 | 5     |

Today, the results include "key1" and "key3". I would like to get the count of "key3" only, as it is new today and didn't show up yesterday:

| key  | count |
| key3 | 15    |

Thanks in advance!
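One hedged way to express this as a single search over two days, assuming {query} produces the key field:

```spl
{query} earliest=-1d@d latest=now
| eval period = if(_time < relative_time(now(), "@d"), "yesterday", "today")
| stats count(eval(period=="today")) as today, count(eval(period=="yesterday")) as yesterday by key
| where today > 0 AND yesterday == 0
| rename today as count
| table key, count
```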
Hi, I'm currently facing a problem where Splunk detects all the files in my directory, but when I search, not all of them show up. Any ideas?
Hello, what is the recommended way to create apps from the Splunk CLI? Do you think > $SPLUNK_HOME/etc/apps/splunkdj createapp MyAppName should work? Your recommendation will be highly appreciated, thank you.
Hi, has anyone tried using the Node.js agent to see whether it detects the Next.js framework? Next.js is an open-source web development framework built on top of Node.js, so I don't know whether it will at least partially work.
In the iOS mobile app, the time range picker for all dashboards defaults to 15 minutes instead of 'Today' as in the web version. How can I update the time range picker's default value to match the web?
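On the dashboard side, a hedged sketch: in Simple XML a time input's default can be pinned to 'Today' with earliest/latest values (whether the mobile app honours the dashboard default is a separate question worth testing; the token name is a placeholder):

```xml
<input type="time" token="global_time">
  <label>Time range</label>
  <default>
    <earliest>@d</earliest>
    <latest>now</latest>
  </default>
</input>
```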
How do I configure my AWS application (which is mostly Lambda functions called by state machines) to properly propagate trace context? Right now I see traces that represent portions of my state machines, but not the whole state machine. Can Splunk ingest the X-Ray trace ID generated by AWS Step Functions (if I turn on X-Ray tracing for the step function)? I am assuming that without that trace ID being generated, Splunk APM won't be able to track Lambda functions across a state machine.
I have three servers in a Windows deployment: a Splunk search server, a Splunk index server, and a Splunk deploy server. I would like to upgrade the KV store, but I'm not clear on whether I need to use the cluster implementation instructions or the single KV store instructions. I am currently running 8.2.6 and upgraded a few months ago from 8.0.x. When I run the command "start-shcluster-migration kvstore -storageEngine wiredTiger -isDryRun true", I get "Admin handler 'shclustercaptainkvstoremigrate' not found". This is step 1 in a clustered KV store setup. If I try the REST API option for step 1, the command errors out because -d is listed twice in the command: curl -k -u admin:changeme https://localhost:8089/services/shcluster/captain/kvmigrate/start -d storageEngine=wiredTiger -d isDryRun=true. Also, in the single-deployment instructions you edit the server.conf file: [kvstore] storageEngineMigration=true. I don't see this in the cluster implementation instructions, unless I missed something.
Hello everyone, I am trying to design a solution for a use case where a team wants to review FIX messages and use that data for application observability. After reading the article (https://www.splunk.com/en_us/blog/customers/splunk-in-financial-services.html), it looks like there is an app, translatefix, which could be used. However, I'm not able to find the app on Splunkbase. Can someone please advise whether the add-on exists, or whether there is a more feasible solution available?