Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi everyone, I've found similar questions and answers in the history, like "Search modes are just for the UI". But does it really not matter? If I create and tune a search in Verbose mode, how exactly will Splunk run the query, and in what "search mode"? I can't find that answer anywhere, even on the docs page ( https://docs.splunk.com/Documentation/Splunk/9.0.1/Search/Changethesearchmode ). Thanks a lot for any details.
I'm an end user! It appears to be just my user account; we don't seem to be able to find the answer. When I do any search (such as index="med") I get:

"Error in 'litsearch' command: Unable to parse the search: unbalanced parentheses."

When I went through the logs I was surprised to see that such a simple search resulted in:

litsearch (index="med" index=nessus ((source="SI - EZproxy" orig_sourcetype="nessus:scan") OR sourcetype="nessus:scan") | lookup Device_Details nt_host as host-fqdn output bunit | search bunit="Medicine") | litsearch (index="med" index=nessus sourcetype=nessus:scan | lookup Device_Details nt_host as host-fqdn output bunit | search bunit="Medicine") | fields keepcolorder=t "*" "_bkt" "_cd" "_si" "host" "index" "linecount" "source" "sourcetype" "splunk_server" | remotetl nb=300 et=1660905790.000000 lt=1660906690.000000 remove=true max_count=1000 max_prefetch=100

While the parentheses balance overall, I read somewhere that they have to balance within each pipe (|), which they don't. We do indeed have a nessus index, and several months ago someone started work on getting a Nessus reporting dashboard in Splunk to work (still ongoing). However, I am not sure why a simple search on index=med would reference "nessus". Does the litsearch command look wrong? Where is it picking up the configuration that produces such a command, and can it be fixed? I have tried to create a table view of "med" and I get no entries rather than an error. I did that because it would be good to see the index and know it's not a permission error.
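Since this seems scoped to one user account, one hedged thing to check (a sketch, not a confirmed diagnosis): a role-level search filter or automatic lookup with unbalanced parentheses gets appended to every search run by users holding that role, which would explain the nessus terms and the lookup showing up in litsearch. Someone with admin rights could list the filters with something like:

| rest /services/authorization/roles splunk_server=local
| table title srchFilter srchIndexesAllowed

If one of your roles has a srchFilter containing the nessus/bunit clauses with a stray parenthesis, fixing that filter should fix the error.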
Field name = pluginText

<plugin_output>Information about this scan :
Nessus version : 10.3.0
Nessus build : 20080
Plugin feed version : 202208222232
Scanner edition used : Nessus
Scanner OS : LINUX
Scanner distribution : es7-x86-64
Scan type : Normal
Scan name : Host_Discovery & OS_Identification
Scan policy used : 93e1da98-656c-5cd5-933b-ce6665fc0486-1939724/Host_Discovery_Scan_03292022
Scanner IP : 10.102.10.1
Port scanner(s) : nessus_syn_scanner
Port range : sc-default
Ping RTT : 11.921 ms
Thorough tests : no
Experimental tests : no
Plugin debugging enabled : no
Paranoia level : 1
Report verbosity : 1
Safe checks : yes
Optimize the test : yes
Credentialed checks : no
Patch management checks : None
Display superseded patches : yes (supersedence plugin launched)
CGI scanning : disabled
Web application tests : disabled
Max hosts : 30
Max checks : 5
Recv timeout : 5
Backports : None
Allow post-scan editing : Yes
Scan Start Date : 2021/8/10 1:55 UTC
Scan duration : 63 sec
</plugin_output>
I am in the process of mapping the use-cases that we have within Splunk / Enterprise Security to MITRE, and am also trying to organize them a bit. I'm using Splunk Security Essentials 3.6 and have a question concerning "Any Splunk Logs". On the Content Introspection screen, some of my use-cases are organized into categories such as AWS, Application Load Balance, Authentication, Anti-virus, etc. However, a large percentage of my content just appears under the "Any Splunk Logs" heading. How can I change this? I even went back to the Data Inventory screen and manually defined some of the indexes and sourcetypes to other categories, but nothing has changed. Help!
I can't figure out the correct syntax for the second eval statement, or what else I should use instead of eval. I know the second eval statement's syntax is incorrect; I am just placing it here so you can understand what I am trying to accomplish.

| eval FieldA=if(like(computername, "ABC%"), "Yes", "No")
| eval FieldB = if FieldA="No", then FieldB = FieldC, else FieldB = FieldA

Thank you!
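A minimal sketch of how that second statement is usually written (assuming FieldC already exists on the events): eval's if() returns a value, so the assignment stays on the left and the two outcomes become the second and third arguments:

| eval FieldA=if(like(computername, "ABC%"), "Yes", "No")
| eval FieldB=if(FieldA="No", FieldC, FieldA)

A case() expression works just as well if more conditions get added later.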
How can I display the subsearch_scheduler?

index=_internal
    [ inputlookup splunk-servers | search splunk-component="Search Head" | fields host]
    source=/opt/ovz/splunk/var/log/splunk/scheduler.log
    [search index=_internal
        [ inputlookup splunk-servers | search splunk-component="Search Head" | fields host]
        log_level=ERROR component=SearchMessages sid=subsearch_scheduler*
    | table sid | dedup sid]
| stats count values(savedsearch_name) dc(savedsearch_name) by user
| sort - count
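A hedged sketch, assuming the goal is to show which subsearch_scheduler sid matched in the final table: the sid field from scheduler.log can simply be added to the stats, for example:

| stats count values(savedsearch_name) dc(savedsearch_name) values(sid) AS subsearch_sids by user
| sort - count

If you instead want one row per sid, change the split-by clause to by user, sid.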
I have records like this:

_time  id  status
1      x   yes
1      x   no
2      x   yes
1      x   unknow

I want to return a record based on the status value: if any row has status "yes", return the latest row that has "yes"; if there is no "yes" value, I want the row with "no"; if there is neither "yes" nor "no", return the "unknow" row.
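A minimal sketch of one way to express that priority, assuming the rows carry a usable _time and that "unknow" is the literal value in your data: rank the statuses, sort by rank and then by recency, and keep the first row.

| eval rank=case(status="yes", 1, status="no", 2, true(), 3)
| sort 0 rank - _time
| head 1

If this should be evaluated per id, add id as the first sort field and replace head 1 with dedup id.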
Disclaimer: fairly new to Splunk. I'm stuck on building a table for a dashboard. I would like to list a table of computer names with columns displaying the last 5 min average values for CPU% / Mem% / DiskTransfers / etc.

The search is:

index=azure sourcetype="mscs:azure:eventhub:vmmetrics" body.Computer=* body.ObjectName="Processor"
| stats first(body.CounterValue) by body.Computer

That gives me the last Processor value for each computer. (I can't do a 5 min average; that can be a bonus-point answer!) How would I add the same search into the table but with body.ObjectName="Memory" and then body.ObjectName="DiskTransfers" in place of the original field value, and then combine that into one table? Thanks for helping.
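A hedged sketch of one way to get all three counters into a single table, assuming the counter names are exactly "Processor", "Memory", and "DiskTransfers" in body.ObjectName: search all of them at once and let chart pivot the object name into columns.

index=azure sourcetype="mscs:azure:eventhub:vmmetrics" body.Computer=* body.ObjectName IN ("Processor", "Memory", "DiskTransfers") earliest=-5m
| chart avg(body.CounterValue) over body.Computer by body.ObjectName

The earliest=-5m together with avg() is what gives the 5 minute average; drop earliest if the dashboard's time picker should control the range instead.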
Hello, here is my data! Basically everything is in the same table; however, I separated it to better explain my problem.

number  customer  ref
1       1         A
2       1         B
3       1         C
4       2         D

number  customer  ref
2-1     X         B
2-2     X         B
3-1     X         C

I would like a table that groups all the data as you can see here, with the ordering made by a correlation on the "ref":

number  customer  ref
1       1         A
2       1         B
2-1     X         B
2-2     X         B
3       1         C
3-1     X         C
4       2         D

I have already tried many things: map, append, foreach, etc. Do you have any ideas? Thank you
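A small sketch, assuming both sets of rows really do live in the same index as described: since the desired order is exactly ref first and then number, a plain sort reproduces the grouping without map or append (your_index is a placeholder for your real search).

index=your_index
| table number customer ref
| sort 0 ref, str(number)

The str() modifier forces a string comparison so "2", "2-1" and "2-2" stay in that order. If the two sets actually come from different searches, an | append [...] of the second search just before the sort gives the same result.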
When you run the following:

https://<IP Address of Splunk instance>:<PortNumber>/en-US/debug/refresh

What exactly do you refresh? E.g. indexes.conf, reading of new applications installed into Splunk? Is there a page I can reference as to what it does, and a list of what it refreshes? Thank you in advance for any help provided.
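A hedged note rather than an authoritative answer: the /debug/refresh page itself lists the configuration endpoints whose reload handlers it re-triggers (saved searches, props/transforms, navigation, and so on), and it is generally understood to accept an entity query parameter to limit the refresh to specific endpoints, for example:

https://<IP Address of Splunk instance>:<PortNumber>/en-US/debug/refresh?entity=admin/savedsearch

Treat the exact entity names as examples taken from that page rather than a documented list; settings whose endpoints are not listed there still need a restart to pick up changes.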
Hi All, I'm totally new to Splunk. Please let me know if anyone can explain what the search heads below are, from the perspective of installing an app.

1 - AdHoc SH
2 - Premium SH
3 - SH Cluster
4 - IDM
We are using Splunk Enterprise version 8.2.3 and are currently working on an issue with displaying a line chart. The search, visualized as a line chart, works properly with the time range from 20:00 to 20:30 and span=1m, but not with the time range from 20:00 to 21:00 and span=1m.

[screenshot: with time range 20:00-20:30 the line chart looks OK, both in search and in the dashboard]

[screenshot: with time range 20:00-21:00 the line chart in the dashboard is incomplete; there is a 13 minute gap showing 0]

Here you can see the 13 minute gap, and we don't know why, because we have events to display; the events for the time range 20:00-21:00 look OK. The search is working properly here but not in the dashboard. Can you explain this weird behavior and where the issue is, please? Thank you in advance.
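A diagnostic sketch rather than a definitive fix, assuming the dashboard panel runs the same base search with span=1m (index=your_index stands in for your actual search): explicitly bucketing and zero-filling helps separate "buckets are missing from the results" from "the dashboard is rendering them wrong".

index=your_index ...
| timechart span=1m count
| fillnull value=0

If the gap disappears with fillnull, the panel was dropping empty buckets; if it remains, compare the panel's earliest/latest tokens with the values used in the search view, since a mismatch there is a common cause of a truncated chart.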
Hi All, please let me know if anyone can help me with this query. I need a query to find the total number of network devices reporting to the indexer for any specified time range.
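A minimal sketch, assuming the network devices land in a dedicated index and that each device reports with its own host value (both are assumptions; swap in your real index or a sourcetype filter):

| tstats dc(host) AS reporting_devices where index=network_devices

Run it over whatever range you pick in the time picker; dc(host) counts each device once regardless of how many events it sent.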
To try something new, I decided to try my hand at creating one of my new dashboards in Dashboard Studio. I'm running into an issue that I can't seem to overcome. I have a table which uses an SPL search as a data source. One of the fields is "zip code". When viewing the SPL in Search, the zip code displays as intended (i.e. 5 characters, with leading zeros intact). However, when I add it to the table object in Dashboard Studio, it detects that column as a number, which in turn cuts off any leading zeros. I have tried everything I can find in the documentation regarding column formatting, but can't seem to convert it back to a string. Any ideas on how to either prevent the table from detecting it as a number, or how to convert it back to a string once it's loaded in?
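One hedged workaround, done in SPL before the table ever sees the field, assuming the field is literally named zip_code (rename to match your actual field): re-pad the value with printf so it always comes out as a fixed-width string.

... | eval zip_code=printf("%05d", tonumber(zip_code))

Whether Dashboard Studio still coerces the padded value back to a number is worth testing; if it does, formatting the column as text in the table options, or prefixing a non-numeric character, would be the fallback.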
Every time I run a specific SPL query (which I cannot reveal due to our security process), I get the message below:

"Currently displaying the recent 1000 events in the selected range. Select a narrower range or zoom in to see more events."

Please help me resolve this issue, and please point me to the relevant Splunk documentation if any exists.
Hi, when creating a health rule, for "use data from last ___ mins" (1 to 120 mins): is this a rolling time window? I.e., does it evaluate the last X minutes every time? Is there any way to choose a static timeframe? Meaning, choose the first 5 mins of data for evaluation, then the next 5 mins of data for the next evaluation. Thanks, Viji
Hi, how do I suppress notable events in Splunk ITSI? And when an episode breaks, do the related notable events get cleared? And when a new episode gets created, will the related notable event count be a fresh count from the time of episode creation, or will it be accumulated on top of the previous count? Please clarify. Thanks!
I have this dropdown, which produces correct results:
i have this dropdown which produces correct results:       <input type="dropdown" token="tUser" searchWhenChanged="true"> <label>User Name</label> <choice value="*">All</choice> <default>*</default> <fieldForLabel>numUsername</fieldForLabel> <fieldForValue>username</fieldForValue> <search> <query>index=tvlog | stats count AS "Quantity" by username | strcat username " (" Quantity ")" numUsername </query> <earliest>$tokEarliestTime$</earliest> <latest>$tokLatestTime$</latest> </search> </input>       there's, among other's, one user named "Support Sul" and an additional user named "Support SuL 2". both show up in the dropdown with the correct number of connections (Quantity). BUT when i select  "Support SuL" from the dropdown, the resulting table contains both users. even worse: when i select "Support SuL 2", i get all "Support SuL 2" users and some "Support SuL" users. this is the table:         <table> <search> <query>index=tvlog $tUser$ | table start_date, end_date, duration, username, devicename | sort start_date desc | rename start_date as "Start Date" | rename end_date as "End Date" | rename username as "User Name" | rename devicename as "Device Name" </query> <earliest>$tokEarliestTime$</earliest> <latest>$tokLatestTime$</latest> </search> <option name="count">20</option> <option name="drilldown">none</option> </table>          the source file is a simple utf-8 encoded csv. what's wrong here?
Hi fellas, how can we fetch details of a playbook run, like action_run_id, playbook_run_id, and status? We need to monitor the health of a playbook with that data. If anyone has any ideas, please help me out.
Hi All, I have configured Splunk_TA_vmware along with SA_Hydra on our HF to collect data from vCenter. I have also installed the VMWIndex add-on on the indexer clusters, as suggested in the documentation. However, the data is going to the lastchance index, when I was hoping the VMWIndex add-on would take care of the proper index configuration. Is there any additional configuration I need to do to get the logs into the indexes created by the VMWIndex add-on? Attaching the indexes.conf file from the add-on. I tried adding index=index_name in the inputs.conf of the Splunk_TA_vmware add-on, but no luck; it has no effect and the data still goes into the lastchance index only. Kindly suggest.
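A small diagnostic sketch, assuming the lastchance index is searchable from your search head, to see exactly which sourcetypes and sources are being routed there (and therefore which collection inputs still need an index assigned):

| tstats count where index=lastchance by sourcetype, source

Worth noting as a hedge rather than a confirmed answer: for Splunk_TA_vmware the target index is generally controlled through the TA's data collection configuration rather than through inputs.conf, which would explain why the inputs.conf change had no effect.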