All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I mean it's not adding the src_user_idx field to the logs. The log files contain a `src_ip` field, so I expect the src_user_idx field to get populated. Using this search:

index=pan_logs sourcetype=pan:traffic earliest=-1m | fields src_ip, src_user*

I get src_ip but no src_user_idx. I did confirm that the src_ip values are in fact in the lookup table.
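One way to narrow this down is to run the lookup explicitly rather than relying on the automatic lookup. The lookup name below is a hypothetical stand-in; substitute your own:

```
index=pan_logs sourcetype=pan:traffic earliest=-1m
| lookup my_src_user_lookup src_ip OUTPUT src_user_idx
| fields src_ip, src_user*
```

If the explicit lookup populates src_user_idx while the automatic one does not, check that the automatic lookup definition is scoped to the pan:traffic sourcetype (or the pan_logs index) and that its input field name matches src_ip exactly.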
Something like

index="my_index" [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS msg.parsedAddresses.to{}] final_module="av" final_action="discard"
| rename msg.parsedAddresses.to{} AS To, envelope.from AS From, msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type
| eval Time=strftime(_time,"%H:%M:%S %m/%d/%y")
| stats count, list(From) as From, list(Subject) as Subject, list(Time) as Time, list(Virus_Type) as Virus_Type by To
| search [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS To]
| sort -count
| table Time,To,From,Subject,Virus_Type
| foreach Time From Subject Virus_Type [eval <<FIELD>> = mvindex(<<FIELD>>, 0, 4)]
Here are the screenshots. In the Incident Review settings, I have already labeled signature. Then, in the Correlation Search content settings, I have also set the search query, which produces results with a signature field. This search runs normally in the search head and shows the result I want. But in the drill-down search and description, this $signature$ token does not render in the notable on Incident Review. May I ask how to solve this issue?
That makes sense, so is there any query or any way to find out how many MBs these search results are consuming on disk?
FYI as of 2024: this command is subject to the limit specified in limits.conf. By default, it returns at most 10,000 events, even if there are more than that. Instead, use:

| sort 0 -_time

This returns the full result set, although it can impact performance.
Hi @avikc100

Basically, in Splunk, time and date operations should be done like this:
1) Splunk has an event's timestamp in some format (dd-mm-yy aa:bb:cc dddd).
2) Convert that to an epoch timestamp using strptime(<str>, <format>), which takes a human-readable time, represented by a string, and parses it into a UNIX timestamp using the format you specify.
3) Then do sorting and comparison operations on the epoch timestamp.
4) Finally, convert back to a human-readable timestamp using strftime(<time>, <format>), which takes a UNIX time value and renders the time as a string using the format specified.

If any reply helped you, then karma / upvotes appreciated, thanks.
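The steps above can be sketched in SPL like this (the field name event_time and its format string are hypothetical; substitute the ones that match your data):

```
... | eval epoch=strptime(event_time, "%d-%m-%y %H:%M:%S")
| sort 0 -epoch
| eval readable=strftime(epoch, "%Y-%m-%d %H:%M:%S")
```

Sorting on the numeric epoch field avoids the lexicographic ordering you would get by sorting on the timestamp string directly.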
>>> Basically we export from Airtable as a csv.

Just after getting the above CSV file, could you please try to upload it to Splunk (without the Notepad++ tasks)? If that upload fails, open the file in Notepad (hoping that the file is not too big) and inspect it manually; if all looks good, maybe try to upload once again. Also, could you copy-paste a screenshot of the error (after editing/masking/hiding sensitive details)? Thanks.
To ask a good question, you really want to tell people what the desired output is. Illustrate with a table (anonymized as needed), not just code, and not a screenshot of output that you think is wrong. (Screenshots are usually less useful anyway.) For example:

agent.status.policy_refresh_at  UpdateDate  UpdateTime  host
2024-01-04T10:31:35.529752Z     ??          ???         blah

Without your actual description, volunteers can only speculate that UpdateDate (per customary denotation) is 2024-01-04. But what about UpdateTime? Do you want 10:31:35.529752Z? Do you want 10:31:35.529752? Do you want 10:31:35.5, as your initial code would have suggested? (Why truncate to 10 characters? Is there a desired precision?)

You also want to let people know your intention with UpdateDate and UpdateTime. Are these for display only? Do you intend to perform numerical comparisons after this table is established? If not, there is no benefit to converting agent.status.policy_refresh_at to an epoch value.

If you want UpdateTime to include the time zone (the trailing "Z" is a valid time zone designator, not an idle letter), this should suffice:

index = xyz
| eval agent.status.policy_refresh_at = split('agent.status.policy_refresh_at', "T")
| eval UpdateDate = mvindex('agent.status.policy_refresh_at', 0)
| eval UpdateTime = mvindex('agent.status.policy_refresh_at', 1)

Your sample data will give:

UpdateDate  UpdateTime        agent.status.policy_refresh_at  host
2024-01-04  10:31:35.529752Z  2024-01-04 10:31:35.529752Z     CN****
2024-01-04  10:31:51.654448Z  2024-01-04 10:31:51.654448Z     CN****
2023-11-26  05:57:47.775675Z  2023-11-26 05:57:47.775675Z     gb****
2024-01-04  10:32:14.416359Z  2024-01-04 10:32:14.416359Z     cn****
2024-01-04  10:30:32.998086Z  2024-01-04 10:30:32.998086Z     cn****

If you do not wish the time zone to be included (not sure why that is desirable), you can strip it, like:

index = xyz
| eval agent.status.policy_refresh_at = split('agent.status.policy_refresh_at', "T")
| eval UpdateDate = mvindex('agent.status.policy_refresh_at', 0)
| eval UpdateTime = replace(mvindex('agent.status.policy_refresh_at', 1), "\D$", "")

If you want to control precision, you can also limit the number of decimals, etc. Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="agent.status.policy_refresh_at,host
2024-01-04T10:31:35.529752Z,CN****
2024-01-04T10:31:51.654448Z,CN****
2023-11-26T05:57:47.775675Z,gb****
2024-01-04T10:32:14.416359Z,cn****
2024-01-04T10:30:32.998086Z,cn****"
``` data emulation above, equivalent to index = xyz ```
Something like this should sort your column in the intended order with the time format requested.

<base_search>
``` bucket _time into each respective day ```
| bucket span=1d _time
``` transform data into a normal Splunk-friendly timeseries format ```
| chart count as count over _time by DFOINTERFACE
``` ensure ascending order of _time field ```
| sort 0 +_time
``` format timestamp as desired ```
| eval timestamp=strftime(_time, "%m/%d/%Y")
``` remove _time field (no longer needed) ```
| fields - _time
``` transpose table (this should retain the sort order of dates) ```
``` note: transpose has default limits on the number of columns that will display. The 25 here says allow at most 25 columns before truncation occurs. ```
| transpose 25 header_field=timestamp column_name=DFOINTERFACE

Example output: (screenshot) This is basically the same question that was asked here: https://community.splunk.com/t5/Splunk-Search/how-to-report-based-on-date/m-p/673054
There is no good way to sort columns using the mm/dd/yyyy format. What's wrong with yyyy-mm-dd?

index="*" source="*"
| eval timestamp=strftime(_time, "%F")
| chart limit=30 count as count over DFOINTERFACE by timestamp
Thank you. For Splunk, would you say it is currently impossible to show/hide data labels for individual fields?* No alternative workarounds? (*To clarify: mimicking the behavior of the showDataLabels setting, but applied per field.)
We have a sandbox environment with vSphere and it works mostly just fine. We believe the time sync is correct because we have it set to use the internet to auto-update, and for the sake of being free of errors we have disabled firewalld (this is a mostly Linux environment). However, we are getting the following errors; see attached.
This query is showing date & time haphazardly. How do I sort it like 1/4/2024, 1/3/2024, 1/2/2024, ...?

index="*" source="*"
| eval timestamp=strftime(_time, "%m/%d/%Y")
| chart limit=30 count as count over DFOINTERFACE by timestamp
The showDataLabels setting is all or none. It cannot be set for individual fields, so it would be set like this:

<option name="charting.chart.showDataLabels">all</option>
Is the CSV file well-formatted?  Missing quotes or unescaped embedded quotes (or commas) may affect how the file is loaded.
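For illustration, here is a made-up example of a well-formed row versus a broken one. CSV wraps any field containing commas or quotes in double quotes, and escapes an embedded quote by doubling it:

```
name,comment
alice,"she said ""hi"""
bob,"broken "quote" here"
```

The alice row loads cleanly; the bob row has unescaped embedded quotes, which typically misaligns columns or swallows part of the line when the file is parsed.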
Per documentation: https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/ChartConfigurationReference

The property charting.chart.showDataLabels only allows the values (all | minmax | none). I am attempting to hide data labels for a specific field, but enable data labels for other specified fields. I am attempting to do something similar to charting.fieldColors, which uses maps, but that type is obviously not accepted for the showDataLabels property:

<option name="charting.chart.showDataLabels"> {"field1":none, "field2":all} </option>

Is there a workaround possible for this objective?
Basically we export from Airtable as a csv. In Notepad++ I checked View -> Show Symbol -> Show All Characters, and applied Edit -> EOL Conversion -> Windows Format, but that doesn't work.
I have a report that lists malware received by email that is part of a dashboard. Some months the list for each person can have dozens of events listed. Management would like to only show the latest 5 events for each person. I'm having difficulty finding a good way to accomplish this.

Search:

index="my_index" [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS msg.parsedAddresses.to{}] final_module="av" final_action="discard"
| rename msg.parsedAddresses.to{} AS To, envelope.from AS From, msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type
| eval Time=strftime(_time,"%H:%M:%S %m/%d/%y")
| stats count, list(From) as From, list(Subject) as Subject, list(Time) as Time, list(Virus_Type) as Virus_Type by To
| search [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS To]
| sort -count
| table Time,To,From,Subject,Virus_Type
| head 5

Current output:

time - user1 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time - user2 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time - user3 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B

I'd like to limit it to the latest 5 events by user:

time - user1 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time - user2 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time - user3 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B

Any help greatly appreciated! Thank you!
Solved. My deployment app's outputs.conf file was using the wrong IP address for the Splunk indexer. Some IP changes were made that I wasn't aware of, and I didn't notice until now. Once I updated the deployment app's outputs.conf file with the correct IP address, the cooked connection error went away and logs were getting to the indexer. Thanks.
The second part worked great!  thank you!