All Posts

Is the CSV file well-formatted?  Missing quotes or unescaped embedded quotes (or commas) may affect how the file is loaded.
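For example, under the standard CSV convention a field containing a comma needs surrounding quotes, and an embedded quote is escaped by doubling it (hypothetical data):

name,comment
"Smith, John","She said ""hello"" and left"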
Per the documentation at https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/ChartConfigurationReference the property charting.chart.showDataLabels only allows the values (all | minmax | none). I am attempting to hide data labels for a specific field but enable data labels for other specified fields. I am attempting something similar to charting.fieldColors, which uses maps, but map values are obviously not accepted for the showDataLabels property:

<option name="charting.chart.showDataLabels"> {"field1":none, "field2":all} </option>

Is there a workaround possible for this objective?
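For contrast, the documented usage per the linked reference accepts only one literal value, applied to the whole chart, e.g.:

<option name="charting.chart.showDataLabels">minmax</option>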
Basically we export from Airtable as a CSV. In Notepad++ I tried View -> Show Symbol -> Show All Characters, and Edit -> EOL Conversion -> Windows Format, but that doesn't work.
I have a report that lists malware received by email that is part of a dashboard. Some months the list for each person can have dozens of events listed. Management would like to only show the latest 5 events for each person. I'm having difficulty finding a good way to accomplish this.

Search:

index="my_index" [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS msg.parsedAddresses.to{}] final_module="av" final_action="discard"
| rename msg.parsedAddresses.to{} AS To, envelope.from AS From, msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type
| eval Time=strftime(_time,"%H:%M:%S %m/%d/%y")
| stats count, list(From) as From, list(Subject) as Subject, list(Time) as Time, list(Virus_Type) as Virus_Type by To
| search [| inputlookup InfoSec-avLookup.csv | rename emailaddress AS To]
| sort -count
| table Time,To,From,Subject,Virus_Type
| head 5

Current Output:

time - user1 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time - user2 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time - user3 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B

I'd like to limit it to the latest 5 events by user:

time - user1 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time - user2 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B
time - user3 - sender1@xyz.com - Subject1 - Virus_A
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_C
time -       - sender2@xyz.com - Subject1 - Virus_B
time -       - sender2@xyz.com - Subject1 - Virus_B

Any help greatly appreciated! Thank you!
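One way to express the "latest 5 per person" requirement is to rank events per recipient with streamstats before aggregating. A sketch, untested against this data and omitting the inputlookup filters from the original for brevity:

index="my_index" final_module="av" final_action="discard"
| rename msg.parsedAddresses.to{} AS To, envelope.from AS From, msg.header.subject AS Subject, filter.modules.av.virusNames{} AS Virus_Type
| eval Time=strftime(_time,"%H:%M:%S %m/%d/%y")
| sort 0 - _time
| streamstats count AS recent_rank by To
| where recent_rank <= 5
| stats list(From) AS From, list(Subject) AS Subject, list(Time) AS Time, list(Virus_Type) AS Virus_Type by To

With events sorted newest first, streamstats numbers each recipient's events in order, so the where clause keeps only the five most recent per To.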
Solved. My deployment app's outputs.conf file was using the wrong IP address for the Splunk indexer. Some IP changes were made that I wasn't aware of, and I didn't notice until now. Once I updated the deployment app's outputs.conf with the correct IP address, the cooked connection error went away and logs were reaching the indexer. Thanks.
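For anyone hitting the same symptom, this is the general shape of a minimal deployment-app outputs.conf (the group name, IP, and port here are placeholders, not the values from this thread):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 10.0.0.50:9997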
The second part worked great! Thank you!
How was the file created?  Have you tried changing the line endings (Notepad++ can do this; perhaps other editors can as well)?
An event that does not have a timestamp will not have date_* fields.  That includes events where DATETIME_CONFIG=current or DATETIME_CONFIG=none.
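A minimal props.conf illustration (the sourcetype name is hypothetical):

[my_sourcetype]
# CURRENT assigns index time instead of parsing a timestamp from the event
# text; NONE skips timestamp extraction entirely. Either way, the derived
# date_* fields are not generated for these events.
DATETIME_CONFIG = CURRENT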
You should be able to export _raw search results from a search head as a flat text file. The Export button is next to the dropdown selector for the search mode. From there you just select "Raw Events", name the file, then click "Export".
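If the result set is large, the CLI export may be easier. A sketch, assuming the search CLI flags as I recall them (the search string and output path are placeholders):

splunk search "index=main earliest=-2mon" -output rawdata -maxout 0 > exported_events.txt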
Hello, I have a standalone Splunk Enterprise system (version 9.x) with 10 UFs reporting (Splunk Enterprise and the UFs are all Windows OSs). The Splunk Enterprise standalone system is an all-in-one: indexer, search head, deployment server, license manager, monitoring console...

I created a deployment app to push out a standard outputs.conf file to all the UFs, and it pushed out successfully, just like all the other deployment apps. I deleted ~etc\system\local\outputs.conf from the UFs, restarted the Splunk UF, and made sure that the deployment app showed up in ~etc\apps\ (it did). But now that outputs.conf is no longer in ~etc\system\local, I'm getting this:

WARN AutoLoadBalancedConnectionStrategy [pid TcpOutEloop] - cooked connection to ip=<xx.xx.xxx.xxx>:9997 timed out

I've made sure there isn't any other outputs.conf, especially not in ~etc\system\local so that it doesn't mess with the order of precedence, restarted the UF, and every time I get the same warning... and of course, the logs aren't being sent to the indexer. The UF does still phone home, but sends no actual logs.

When I run:

btool --debug outputs.conf list

I don't get any output. But as soon as I get rid of this deployment app, put the same outputs.conf file back in ~etc\system\local, and restart the UF, logs are sent to the indexer. And my deployment app's structure is the same as the other deployment apps that do work... What am I doing wrong? Thanks.
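(For reference, the usual btool invocation to dump the merged outputs settings with file provenance is:

splunk btool outputs list --debug

i.e. the conf name without the .conf suffix, which may explain why the command above printed nothing.)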
I currently collect logs using the Windows universal forwarder. My client has requested a copy of the logs collected from the Windows sources for the last 2 months. Is there any way to access this information, or is the only way to run a query like index=main | fields _raw?
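For example, a time-bounded variant of that query (the relative time range here is a sketch for "last 2 months"):

index=main earliest=-2mon@mon latest=now | fields _raw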
I think your lookup will need to be applied as an automatic lookup for the srv_name field to be recognized at search time and work at the srchFilter role-restriction level. And the permissions for the CSV, the lookup definition, and the automatic lookup probably all need to be available to the role that the restriction is being applied to.
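A sketch of what the automatic lookup wiring could look like (all stanza, file, and field names here are hypothetical):

# transforms.conf
[srv_name_lookup]
filename = srv_names.csv

# props.conf
[my_sourcetype]
LOOKUP-srv_name = srv_name_lookup host OUTPUT srv_name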
Thanks @richgalloway. I was able to see some data using the source mentioned below. However, the rawSizeBytes field does not match the index size when converted to GB.

source=splunk-storage-detail
Based on your description it sounds like you're wanting something pretty custom for your environment. There's not quite this type of data-splitting, alerting, and re-display framework in Splunk.

First suggestion: check out Splunkbase for any add-ons having to do with alerting. Maybe one of these has a good-enough implementation for what you need. Here are the results for the keyword "alert" across all the apps out there: https://splunkbase.splunk.com/apps?keyword=alert

From what I understand of your description, a big part of what you want is a meaningful display of information to the person handling the alert. One way to solve this is to create a Splunk dashboard that expects inputs via the URL, just like the ?keyword=alert in the above URL. When you include values like that in the URL for a dashboard, you can access them as tokens (within your SimpleXML, for example).

A lot of the time, the SPL that triggers an alert produces results full of "plumbing" data that helped trigger the alert, so that result set isn't very actionable for the responder. This is why you would create a custom dashboard that expects token inputs (like a timeframe and hostname) and then renders visualizations for that host in that timeframe, helping them troubleshoot in response to the email; see the skeleton after this post.

If you haven't already, install the Splunk Dashboard Examples app; it has a lot of good tips and tricks for creating dashboards.
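A skeleton of a token-driven dashboard of that kind (dashboard name, index, and token names are all hypothetical); it could then be linked from the alert email as /app/search/alert_triage?form.hostname=web01&form.timeframe.earliest=-24h@h:

<form>
  <label>Alert Triage</label>
  <fieldset submitButton="false">
    <input type="text" token="hostname">
      <label>Host</label>
    </input>
    <input type="time" token="timeframe">
      <label>Timeframe</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main host=$hostname$ | stats count by sourcetype</query>
          <earliest>$timeframe.earliest$</earliest>
          <latest>$timeframe.latest$</latest>
        </search>
      </table>
    </panel>
  </row>
</form>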
If ADD_EXTRA_TIME_FIELDS = true then why wouldn't those fields be present in every event? How could we ensure that those fields are present in every event?
Is it possible to create a Splunk App with a trial feature? Trial in the sense that it will run for x days with full features (the trial period) and after x days, if a client code/password (or some kind of license) is not provided by the user, it stops working or continues with reduced features? Where can I get any instruction on how to do this? If possible, can such an app be published on Splunkbase?

best regards
Altin
If a field name has special characters in it (e.g. ".", "{", "}"), then it may need to be wrapped in single quotes when used in an eval expression (in eval, single quotes refer to field names, while double quotes create string literals). Example:

index=xyz
| eval evTime=strptime('agent.status.policy_refresh_at',"%Y-%m-%dT%H:%M:%S.%6NZ"), UpdateDate=strftime(evTime,"%Y-%m-%d"), UpdateTime=strftime(evTime,"%H:%M:%S.%1N")
| table agent.status.policy_refresh_at, evTime, UpdateDate, UpdateTime, hostname

Output verified with simulated data on my local instance (screenshot).
The where command expects a boolean result after the logic statement is evaluated. The if() function you shared is returning just another logic statement. To do it in a where command would look something like this:

| where if(((match('Type', "ADZ") AND match('Assetname', "^\S{2}Z")) OR NOT match('Type', "ADZ")), true(), false())

Note: this method expects the fields Type and Assetname to both be available in the dataset at the point of execution. So a simple example of making the Type field available from the multiselect would be:

<base_search>
``` Make the multiselect token value an available field in the dataset. ```
``` Since it is common for multiselect token values to be formatted with double quotes, using $Type|s$ here should account for that. ```
``` It is assumed that the field Assetname is available and derived from <base_search> above. ```
| eval Type=$Type|s$
| where if(((match('Type', "ADZ") AND match('Assetname', "^\S{2}Z")) OR NOT match('Type', "ADZ")), true(), false())

Examples (screenshots): with ADZ in the Type token; without ADZ in the Type token.
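Incidentally, since the match()/NOT match() expression already evaluates to a boolean, the if() wrapper is optional; an untested but equivalent sketch:

| where (match('Type', "ADZ") AND match('Assetname', "^\S{2}Z")) OR NOT match('Type', "ADZ")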
While trying to upload my CSV file as a lookup, I'm encountering this error: "Encountered the following error while trying to save: File has no line endings". I tried removing extra spaces and special characters from the header, but I'm still facing this issue. I also tried saving in different CSV formats like UTF-8 and CSV (MS-DOS), etc., but no luck.
Here is the updated link to the docs: Monitor Windows data with PowerShell scripts - Splunk Documentation