All Posts

Hi, I have a requirement to add lookup data into dashboard panels. Could you please review and help with this? How do I add the lookup data to the SPL query to display the region full name?
SPL:
index=abc sourcetype=a.1 source=a.2
| search region IN (a,b,c,d,e,f,g,h,i,j,l,m)
| chart count by region
Lookup data (lookup file name regiondetails.csv):
Alias  Name
a      america
b      brazil
c      canada
d      dubai
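A minimal sketch of one way to wire that lookup in, assuming regiondetails.csv is available as a lookup file with columns Alias and Name (region_fullname is an illustrative field name, not from the original post):

index=abc sourcetype=a.1 source=a.2
| search region IN (a,b,c,d,e,f,g,h,i,j,l,m)
``` map each region alias to its full name from the lookup file ```
| lookup regiondetails.csv Alias AS region OUTPUT Name AS region_fullname
| chart count by region_fullname

If the file is not visible from the dashboard's app, it may need a lookup definition or shared permissions.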
I would like to know how to view deleted messages from the Splunk bar. For example, if I accidentally delete a message from the Splunk bar, how can I view that message again, either from the web UI or the CLI?
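For context, the bulletin messages that currently exist can be listed through the REST API; a minimal sketch (the /services/messages endpoint is where the Splunk bar bulletins live; whether a deleted message is still present there is not guaranteed):

``` list current bulletin messages ```
| rest /services/messages
| table title message severity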
Hi, which app are you referring to? Is it on Splunkbase or your own? r. Ismo
Hi @AL3Z, if an app isn't compatible with Splunk Cloud, it's very difficult (or impossible) to make it compatible; probably there are forbidden scripts that block the compatibility. You should identify the script that blocks the upload and remove it, but these scripts are probably mandatory for the correct running of the app. But why do you need this app? Splunk Cloud has its own compatibility checks, so I don't think you need another compatibility app for this. Ciao. Giuseppe
Hi, how can we install the Splunk Enterprise Compatibility app on Splunk Cloud? Are there any modifications needed to ensure it's compatible with Splunk Cloud?
Usually _time is the event time. There is also _indextime, which is the time when the event is indexed. Usually when we talk about lag, it means _indextime - event time (_time). If/when your _time is something other than the real event time, then you have some issues (usually several) in your onboarding process, as @PickleRick said.
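A minimal sketch of how to measure that lag, assuming an illustrative index name your_index:

index=your_index
``` seconds between when the event was indexed and the event timestamp ```
| eval lag_seconds = _indextime - _time
| stats avg(lag_seconds) AS avg_lag, max(lag_seconds) AS max_lag by sourcetype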
I mean it's not adding the src_user_idx field to the events - the log files contain a `src_ip` field, so I expect a src_user_idx field to be populated, using this search:

index=pan_logs sourcetype=pan:traffic earliest=-1m
| fields src_ip,src_user*

I get src_ip but no src_user_idx. I did confirm the src_ip values are in fact in the lookup table.
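One way to isolate the problem is to run the lookup explicitly instead of relying on the automatic lookup; a minimal sketch, where your_user_lookup is a hypothetical stand-in for the actual lookup definition name:

index=pan_logs sourcetype=pan:traffic earliest=-1m
``` call the lookup by hand rather than via the automatic lookup ```
| lookup your_user_lookup src_ip OUTPUT src_user_idx
| table src_ip src_user_idx

If this populates src_user_idx but the automatic lookup does not, the automatic lookup configuration is the likely culprit rather than the lookup table itself.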
Something like

index="my_index"
    [| inputlookup InfoSec-avLookup.csv
     | rename emailaddress AS msg.parsedAddresses.to{}]
    final_module="av" final_action="discard"
| rename msg.parsedAddresses.to{} AS To, envelope.from AS From, msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type
| eval Time=strftime(_time,"%H:%M:%S %m/%d/%y")
| stats count, list(From) as From, list(Subject) as Subject, list(Time) as Time, list(Virus_Type) as Virus_Type by To
| search [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS To]
| sort -count
| table Time,To,From,Subject,Virus_Type
| foreach Time From Subject Virus_Type
    [eval <<FIELD>> = mvindex(<<FIELD>>, 0, 4)]
Here are the screenshots: In the Incident Review settings, I have already labeled signature. Then in the correlation search content settings, I have also set up the search query, which produces results with a signature field. This search runs normally on the search head and shows the result I want. But in the drill-down search and description, $signature$ does not show up in the notable in Incident Review. May I ask how to solve this issue?
That makes sense, so is there any query or any way to find out how many MBs these search results are consuming on disk?
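One place to look is the search jobs REST endpoint, which reports per-job disk usage; a minimal sketch (diskUsage is reported in bytes):

``` list search jobs with their on-disk artifact size ```
| rest /services/search/jobs
| table sid label diskUsage
| sort -diskUsage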
FYI as of 2024: this command would hit the limit specified in limits.conf. By default, it returns 10,000 events, even if there are more than that. Instead, use:

| sort 0 -_time

This returns the full result set, although it can impact performance.
Hi @avikc100
Basically, in Splunk, time and date operations should be done like this (see the sketch after this list):
1) Splunk has an event's timestamp in some format (dd-mm-yy aa:bb:cc dddd).
2) Convert that to an epoch timestamp using strptime(<str>, <format>), which takes a human-readable time, represented by a string, and parses it into a UNIX timestamp using the format you specify.
3) Then do sorting and comparison operations on the epoch timestamp.
4) Then convert back to a human-readable timestamp using strftime(<time>, <format>), which takes a UNIX time value and renders it as a string using the format specified.
If any reply helped you, then karma / upvotes are appreciated, thanks.
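A minimal sketch of those steps, assuming an illustrative raw field event_time_str in dd-mm-yy HH:MM:SS format:

``` 2) parse the human-readable string into an epoch timestamp ```
| eval epoch_time = strptime(event_time_str, "%d-%m-%y %H:%M:%S")
``` 3) sort (or compare) on the numeric epoch value ```
| sort 0 epoch_time
``` 4) render the epoch value back as a human-readable string ```
| eval readable_time = strftime(epoch_time, "%Y-%m-%d %H:%M:%S")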
>>> Basically we export from Airtable as a csv.
Just after getting that CSV file, could you please try to upload it to Splunk (without the Notepad++ steps)? If the CSV file upload fails, open the file in Notepad (hoping that the file is not too big) and inspect it manually; if all looks good, please try to upload it once again. Also, could you copy and paste a screenshot of the error (after editing/masking/hiding sensitive details)? Thanks.
To ask a good question, you really want to tell people what the desired output is. Illustrate with a table (anonymize as needed), not just code, and not a screenshot with output that you think is wrong. (Screenshots are usually less useful anyway.) For example,

agent.status.policy_refresh_at  UpdateDate  UpdateTime  host
2024-01-04T10:31:35.529752Z     ??          ???         blah

Without your actual description, volunteers can speculate that UpdateDate (per customary denotation) is 2024-01-04. But what about UpdateTime? Do you want 10:31:35.529752Z? Do you want 10:31:35.529752? Do you want 10:31:35.5, as your initial code would have suggested? (Why truncate to 10 characters? Is there a desired precision?)
You also want to let people know your intention with UpdateDate and UpdateTime. Are these for display only? Do you intend to perform numerical comparisons after this table is established? If not, there is no benefit to converting agent.status.policy_refresh_at to an epoch value.
If you want UpdateTime to include the time zone (the trailing "Z" is a valid timezone, not an idle letter), this should suffice:

index = xyz
| eval agent.status.policy_refresh_at = split('agent.status.policy_refresh_at', "T")
| eval UpdateDate = mvindex('agent.status.policy_refresh_at', 0)
| eval UpdateTime = mvindex('agent.status.policy_refresh_at', 1)

Your sample data will give

UpdateDate  UpdateTime        agent.status.policy_refresh_at  host
2024-01-04  10:31:35.529752Z  2024-01-04 10:31:35.529752Z     CN****
2024-01-04  10:31:51.654448Z  2024-01-04 10:31:51.654448Z     CN****
2023-11-26  05:57:47.775675Z  2023-11-26 05:57:47.775675Z     gb****
2024-01-04  10:32:14.416359Z  2024-01-04 10:32:14.416359Z     cn****
2024-01-04  10:30:32.998086Z  2024-01-04 10:30:32.998086Z     cn****

If you do not wish the timezone to be included (not sure why that is desirable), you can strip it, like

index = xyz
| eval agent.status.policy_refresh_at = split('agent.status.policy_refresh_at', "T")
| eval UpdateDate = mvindex('agent.status.policy_refresh_at', 0)
| eval UpdateTime = replace(mvindex('agent.status.policy_refresh_at', 1), "\D$", "")

If you want to control precision, you can also limit the number of decimals, etc. Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="agent.status.policy_refresh_at,host
2024-01-04T10:31:35.529752Z,CN****
2024-01-04T10:31:51.654448Z,CN****
2023-11-26T05:57:47.775675Z,gb****
2024-01-04T10:32:14.416359Z,cn****
2024-01-04T10:30:32.998086Z,cn****"
``` data emulation above, equivalent to index = xyz ```
Something like this should sort your columns in the intended order with the time format requested.

<base_search>
``` bucket _time into each respective day ```
| bucket span=1d _time
``` transform data into a normal Splunk-friendly timeseries format ```
| chart count as count over _time by DFOINTERFACE
``` ensure ascending order of the _time field ```
| sort 0 +_time
``` format timestamp as desired ```
| eval timestamp=strftime(_time, "%m/%d/%Y")
``` remove _time field (no longer needed) ```
| fields - _time
``` transpose table (this should retain the sort order of the dates) ```
``` note: transpose has a default limit on the number of columns that will display; the 25 here says allow at most 25 columns before truncation occurs ```
| transpose 25 header_field=timestamp column_name=DFOINTERFACE

Example output: (screenshot)
This is basically the same question that was asked here: https://community.splunk.com/t5/Splunk-Search/how-to-report-based-on-date/m-p/673054
There is no good way to sort columns using the mm/dd/yyyy format. What's wrong with yyyy-mm-dd?

index="*" source="*"
| eval timestamp=strftime(_time, "%F")
| chart limit=30 count as count over DFOINTERFACE by timestamp
Thank you. For Splunk, would you say it is currently impossible to show/hide the individual fields?* No alternative workarounds? (*To clarify: mimicking the behavior of the showDataLabels setting for individual fields.)
We have a sandbox environment with vSphere and it works mostly fine. We believe the time sync is correct because we have it set to auto-update over the internet, and for the sake of being free of errors we have disabled firewalld (this is a mostly Linux environment). However, we are getting the following errors; see attached.
This query shows the date & time haphazardly; how do I sort it like 1/4/2024, 1/3/2024, 1/2/2024, ...?

index="*" source="*"
| eval timestamp=strftime(_time, "%m/%d/%Y")
| chart limit=30 count as count over DFOINTERFACE by timestamp
The showDataLabels setting is all or none. It cannot be set for individual fields, so it would be set like this:

<option name="charting.chart.showDataLabels">all</option>