All Posts



The second part worked great!  thank you!
How was the file created?  Have you tried changing the line endings (notepad++ can do this, perhaps other editors can as well)?
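If an editor isn't handy, the conversion can also be scripted. A minimal sketch (my own, not from this thread) that normalizes mixed line endings in Python:

```python
def normalize_line_endings(text: str, eol: str = "\n") -> str:
    """Convert any mix of \\r\\n, \\r, and \\n line endings to a single style."""
    # Collapse Windows (\r\n) and old-Mac (\r) endings to \n first,
    # then re-expand to the requested style.
    return text.replace("\r\n", "\n").replace("\r", "\n").replace("\n", eol)

mixed = "line1\r\nline2\rline3\n"
print(normalize_line_endings(mixed))            # Unix-style endings
print(repr(normalize_line_endings(mixed, "\r\n")))
```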
An event that does not have a timestamp will not have date_* fields.  That includes events where DATETIME_CONFIG=current or DATETIME_CONFIG=none.
You should be able to export _raw search results from a search head as a flat text file. The export button is next to the search mode dropdown selector. From there, just select "Raw Events", name the file, and click "Export".
Hello, I have a standalone Splunk Enterprise system (version 9.x) with 10 UFs reporting (Splunk Enterprise and the UFs all run Windows) - the standalone Splunk Enterprise system is an all-in-one: indexer, search head, deployment server, license manager, monitoring console... I created a deployment app to push out a standard outputs.conf file to all the UFs, and it pushed out successfully, just like all the other deployment apps. I deleted ~etc\system\local\outputs.conf from the UFs, restarted the Splunk UF service, and made sure the deployment app showed up in ~etc\apps\ (it did).

But now that outputs.conf is no longer in ~etc\system\local, I'm getting this: WARN AutoLoadBalancedConnectionStrategy [pid TcpOutEloop] - cooked connection to ip=<xx.xx.xxx.xxx>:9997 timed out

I've made sure there isn't any other outputs.conf (especially not in ~etc\system\local, so it doesn't interfere with the order of precedence), restarted the UF, and every time I get the same warning... and of course, the logs aren't being sent to the indexer. The UF does still phone home, just no actual logs. When I run: btool --debug outputs.conf list I don't get any output. But as soon as I remove this deployment app and put the same outputs.conf file back in ~etc\system\local and restart the UF, logs are sent to the indexer. And my deployment app's structure is the same as the other deployment apps that do work... What am I doing wrong? Thanks.
I'm currently collecting logs using the Windows universal forwarder. My client has requested a copy of the logs collected from the Windows sources for the last 2 months. Is there any way to access this information, or is the only way to run a query like index=main | fields _raw ?
I think your lookup will need to be applied as an automatic lookup for the srv_name field to be recognized at search time and work at the srchFilter role-restriction level. And the permissions for the CSV, lookup definition, and automatic lookup probably all need to be available to the role that the restriction is applied to.
Thanks @richgalloway  I was able to see some data using the source mentioned below. However the "rawSizeBytes" field does not match the index size when converted to GB.    source=splunk-storage-detail 
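For context (my own note, not confirmed above): rawSizeBytes is generally the uncompressed size of the raw event data, so it usually won't match the on-disk index size, which reflects compression plus index and metadata files. A GB-vs-GiB mix-up is another common source of mismatch; a quick sketch of the two conversions:

```python
def bytes_to_gb(n: int) -> float:
    """Decimal gigabytes (what 'GB' strictly means)."""
    return n / 1_000_000_000

def bytes_to_gib(n: int) -> float:
    """Binary gibibytes (what many tools report as 'GB')."""
    return n / (1024 ** 3)

raw = 5_368_709_120  # example value, not real data (exactly 5 GiB)
print(round(bytes_to_gb(raw), 2))   # 5.37
print(round(bytes_to_gib(raw), 2))  # 5.0
```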
Based on your description, it sounds like you're wanting something pretty custom for your environment. There's not quite this type of data-splitting, alerting, and re-display framework in Splunk.

First suggestion: check out Splunkbase for any add-ons having to do with alerting. Maybe one of these has a good-enough implementation for what you need. Here are the results for the keyword "alert" across all the apps out there: https://splunkbase.splunk.com/apps?keyword=alert

From what I understand of your description, a big part of what you want is a meaningful display of information to the person handling the alert. One way to solve this is to create a Splunk dashboard that expects inputs via the URL - just like the ?keyword=alert in the above URL. When you include values like that in the URL for a dashboard, you can access them as tokens (within your SimpleXML, for example).

A lot of times the SPL that triggers an alert produces results full of "plumbing" data that helped trigger the alert, so that result set isn't very actionable for the responder. This is why you would create a custom dashboard that expects token inputs (like a timeframe and hostname) and then renders visualizations for that host in that timeframe, helping them troubleshoot in response to the email.

If you haven't already, install the Splunk Dashboard Examples app - it has a lot of good tips and tricks for creating dashboards.
If ADD_EXTRA_TIME_FIELDS = true then why wouldn't those fields be present in every event? How could we ensure that those fields are present in every event?
Is it possible to create a Splunk App with a trial feature? Trial in the sense that it runs for x days with full features (the trial period), and after x days, if a client code/password (or some kind of license) is not provided by the user, it stops working or continues with reduced features? Where can I find instructions on how to do this? If possible, can such an App be published on Splunkbase? Best regards, Altin
If a field name has special characters in it (e.g. ".", "{", "}"), then it may need to be wrapped in single quotes when used in an eval function. Example:   index=xyz | eval evTime=strptime('agent.status.policy_refresh_at',"%Y-%m-%dT%H:%M:%S.%6NZ"), UpdateDate=strftime(evTime,"%Y-%m-%d"), UpdateTime=strftime(evTime,"%H:%M:%S.%1N") | table agent.status.policy_refresh_at, evTime, UpdateDate, UpdateTime, hostname   Output with sim data on my local instance.
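For comparison only (my own aside, not part of the answer above): the SPL format string "%Y-%m-%dT%H:%M:%S.%6NZ" matches timestamps like 2024-01-04T10:31:35.529752Z; in Python's strptime, %f plays the role of %6N:

```python
from datetime import datetime, timezone

def parse_refresh_at(s: str) -> float:
    """Parse an ISO-style UTC timestamp with microseconds into epoch seconds."""
    dt = datetime.strptime(s, "%Y-%m-%dT%H:%M:%S.%fZ")  # %f ~ SPL's %6N
    return dt.replace(tzinfo=timezone.utc).timestamp()

ts = parse_refresh_at("2024-01-04T10:31:35.529752Z")
# Round-trip the epoch value back into date and time strings,
# mirroring the strftime calls in the SPL example.
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(utc.strftime("%Y-%m-%d"))  # 2024-01-04
print(utc.strftime("%H:%M:%S"))  # 10:31:35
```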
The where command expects a boolean result after the logic statement is evaluated. The if() function you shared passes just another logic statement. I think doing it in a where command would look something like this:

| where if(((match('Type', "ADZ") AND match('Assetname', "^\S{2}Z")) OR NOT match('Type', "ADZ")), True(), False())

Note: this method expects the fields Type and Assetname to both be available in the dataset up to the point of its execution. So a simple example of making the "Type" field available from the multiselect would be:

<base_search>
``` make the multiselect token value an available field in the dataset ```
``` Since it is common for multiselect token values to be formatted with double-quotes, doing a $<token_name>|s$ here should account for that ```
``` It is assumed that the field "Assetname" is available and derived from <base_search> above. ```
| eval Type=$Type|s$
| where if(((match('Type', "ADZ") AND match('Assetname', "^\S{2}Z")) OR NOT match('Type', "ADZ")), True(), False())

Examples: (with ADZ in Type token) (without ADZ in Type token)
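A rough Python analogue of that where/if logic (my own illustrative sketch, with hypothetical field values): keep an event when Type doesn't contain "ADZ", or when it does and Assetname starts with two non-space characters followed by "Z":

```python
import re

def keep_event(type_val: str, assetname: str) -> bool:
    """Mirror of: (match(Type,"ADZ") AND match(Assetname,"^\\S{2}Z")) OR NOT match(Type,"ADZ")."""
    if re.search(r"ADZ", type_val):
        # Type matched ADZ: event survives only if Assetname's 3rd char is Z.
        return bool(re.search(r"^\S{2}Z", assetname))
    # Type did not match ADZ: event always survives.
    return True

print(keep_event("ADZ", "XYZ123"))    # True  (third character is Z)
print(keep_event("ADZ", "ABC123"))    # False
print(keep_event("OTHER", "ABC123"))  # True
```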
While trying to upload my CSV file as a lookup, I'm encountering this error: "Encountered the following error while trying to save: File has no line endings". I tried removing extra spaces and special characters from the header, but I'm still facing the issue. I also tried saving the file in different CSV formats (UTF-8, CSV MS-DOS, etc.), but no luck.
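One guess (an assumption on my part, not confirmed in this thread) is that the file uses bare \r line endings or lacks a trailing newline. A small Python sketch that rewrites CSV bytes with standard \n endings and a final newline:

```python
def fix_csv_line_endings(data: bytes) -> bytes:
    """Normalize \\r\\n and bare \\r to \\n, and ensure a trailing newline."""
    text = data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")
    if not text.endswith(b"\n"):
        text += b"\n"
    return text

broken = b"host,count\rweb01,5\rweb02,7"  # old Mac-style \r-only endings
print(fix_csv_line_endings(broken))
```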
Monitor Windows data with PowerShell scripts - Splunk Documentation   Here is the updated link to the docs.
The table format got changed. Please see the picture instead.
Hi, We set up Security Command Center to send alerts to Splunk for detecting mining activity. However, I've observed that we're not receiving SCC logs in Splunk at the moment. What steps can we ta... See more...
Hi, We set up Security Command Center to send alerts to Splunk for detecting mining activity. However, I've observed that we're not receiving SCC logs in Splunk at the moment. What steps can we take to resolve this issue? Thanks
Thank you for the update. Looks like I am missing something - the eval statements do not produce results. My SPL statement:

--Query------
Index=xyz 
| eval evTime=strptime(agent.status.policy_refresh_at,"%Y-%m-%dT%H:%M:%S.%6NZ")
| eval UpdateDate=strftime(evTime,"%Y-%m-%d")
| eval UpdateTime=strftime(evTime,"%H:%M:%S.%1N")
| table agent.status.policy_refresh_at, evTime, UpdateDate, UpdateTime, hostname
-----------------

Results (evTime, UpdateDate, and UpdateTime come back empty):

agent.status.policy_refresh_at | evTime | UpdateDate | UpdateTime | hostname
2024-01-04T10:31:35.529752Z    |        |            |            | CN*******
2024-01-04T10:31:51.654448Z    |        |            |            | CN*******
2023-11-26T05:57:47.775675Z    |        |            |            | gb********
2024-01-04T10:32:14.416359Z    |        |            |            | cn********
2024-01-04T10:30:32.998086Z    |        |            |            | cn*******
Hi I want to migrate or move the Splunk instance from a Mac to a Windows Server 2019. I want to make sure this license is moved to the new machine. Is there a step-by-step process to to perform this... See more...
Hi, I want to migrate or move the Splunk instance from a Mac to a Windows Server 2019 machine. I want to make sure the license is moved to the new machine. Is there a step-by-step process to perform this activity? Thanks.
Does this give you the intended behaviour?

index=proxy c_ip=$cip$ cs_host=$cshost$ action=$action$ (dest_ip=$destip$ OR NOT dest_ip=*)

I think including (dest_ip=$destip$ OR NOT dest_ip=*) will search on any token input but also include results for events that don't have a dest_ip field in them. The only issue I see is that if a dashboard user searches for a specific dest_ip, they will also get results that match all the other field criteria but have a null dest_ip. If you want to filter out the events with a null dest_ip when a specific dest_ip is being searched (anything other than "*"), you could add some additional filter criteria:

index=proxy c_ip=$cip$ cs_host=$cshost$ action=$action$ (dest_ip=$destip$ OR NOT dest_ip=*)
| eval filter_off=if(NOT "$destip$"=="*" AND isnull(dest_ip), 1, 0)
| where 'filter_off'==0
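A Python sketch of that filtering idea (illustrative only; the event dicts and token values are hypothetical): a missing dest_ip passes only when the token is the wildcard "*":

```python
from fnmatch import fnmatch

def keep(event: dict, destip_token: str) -> bool:
    """Keep an event unless a specific dest_ip is requested and the field is missing."""
    dest = event.get("dest_ip")
    if dest is None:
        # Null dest_ip only survives a wildcard search.
        return destip_token == "*"
    return fnmatch(dest, destip_token)

events = [{"dest_ip": "10.0.0.5"}, {}]
print([keep(e, "*") for e in events])         # [True, True]
print([keep(e, "10.0.0.5") for e in events])  # [True, False]
```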