All Posts

Awesome, that seemed to do it, thank you so much.

index="azure-activity"
| spath input=_raw path=properties.targetResources{}.modifiedProperties{} output=hold
| eval hold = mvfilter(like(hold,"%Group.DisplayName%"))
| spath input=hold path=newValue output=NewGroupName
| search operationName="Add member to group"
| stats count by "properties.initiatedBy.user.userPrincipalName", "properties.targetResources{}.userPrincipalName", NewGroupName, operationName, _time
The default csv sourcetype has INDEXED_EXTRACTIONS=csv, which changes how the data is processed. Even if the SEDCMD is applied (of which I'm not sure), the fields are already extracted, and since you're only editing _raw, you're not changing the already-extracted fields.
I see the report by ReportKey now, but the graph is leaner. I wonder how I can get something like the one in the article.
Then I should expect it's as you said - something about file locking. There is another input type for Windows which might be able to help here - MonitorNoHandle. But it has quite a few limitations, judging from the spec, and I've never used it, so I can't tell you how it performs.
I just found this in the admin guide: "To anonymize data with Splunk Enterprise, you must configure a Splunk Enterprise instance as a heavy forwarder and anonymize the incoming data with that instance before sending it to Splunk Enterprise." Previously, other documents said this can be performed on either the indexer OR a heavy forwarder. I wonder if this is why it isn't working? https://docs.splunk.com/Documentation/Splunk/9.2.0/Data/Anonymizedata
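For reference, this kind of anonymization rule lives in props.conf on the heavy forwarder. A minimal sketch, assuming a hypothetical sourcetype name and a hypothetical SSN-style masking pattern:

```
# props.conf on the heavy forwarder (hypothetical sourcetype and pattern)
[my_sourcetype]
SEDCMD-mask_ids = s/\d{3}-\d{2}-\d{4}/xxx-xx-xxxx/g
```

Note that per the note above, placing this on an indexer (or on a sourcetype with INDEXED_EXTRACTIONS) may not behave the same way.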
Thank you very much.
A simple mistake: you forgot quotes around the values you want to assign to the ReportKey field, so Splunk treats those values as field names. As you apparently have no such fields in your data, you end up with empty (null) values.
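A minimal sketch of that fix, with the values quoted so eval treats them as string literals rather than field names (the surrounding search is elided):

```
... | eval ReportKey="today"
| append [search ...
  | eval ReportKey="yesterday"
  | eval _time = _time + 2*86400]
| timechart span=1H count by ReportKey
```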
Hi. I found an old article on the subject and followed it, but I do not see overlaying charts. My SPL:
-------------
index=firewall sourcetype="collector" fqdn="fw.myorg.com" earliest=-2d@d latest=-1d@d
| multikv
| eval ReportKey=today
| append [search index=firewall sourcetype="collector" fqdn="fw.myorg.com" earliest=-4d@d latest=-3d@d
  | multikv
  | eval ReportKey=yesterday
  | eval _time = _time + 2*86400]
| timechart span=1H count by ReportKey
-------------
So I expected it would report by ReportKey; instead it shows NULL.
Hi All,   Trust all is good on your end!! We recently moved to Splunk Mission Control, and today I stumbled upon one issue. There were a few incidents I worked on the 13th, and on that same day I acknowledged & closed those incidents. Today I checked those incident IDs and found that some are in an "in progress" or "new" state, and the last-updated day shows today's date. I don't understand why there is this discrepancy or how to fix it. Can someone please assist me with this issue?    Thanks Debjit
Hi, I am fairly new to Splunk; thank you in advance if you can help me...:) My goal is to log the service response duration each time an ESService is called. The ESService value can be anything. In the table format below I am able to see which service is being hit and the duration. But in the visualization section, all the events show the same color. Is there any way to show a different color for each ESService? For example, blue for ESBusinessrep, red for ESPerson, etc. (dynamically there can be N number of service types). And when I hover on the bars, they show only the time and duration values, not the ESService. How can I achieve this?
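One common way to get one color per service in a Splunk chart is to split the aggregation by the field, so each ESService value becomes its own series (with its own color and its own name on hover). A sketch, assuming the duration is extracted into a field named duration and substituting your own index/sourcetype:

```
index=my_index sourcetype=my_sourcetype
| timechart span=5m avg(duration) by ESService
```

New ESService values appear as new series automatically, which covers the "N number of service types" case without hard-coding colors.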
How do I assign values to a list or array and use them in a where condition? Thank you in advance!! For example, I tried to search whether the number 4 is in an array/list of the numbers between 0 and 6:

index = test
| eval list-var = (0,1,2,3,4,5,6)
| eval num = 4
| search num IN list-var
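For context on the question above: SPL's eval has no array literal like (0,1,2,3,4,5,6), and hyphenated field names like list-var are also problematic. One sketch of a membership test is to build a multivalue field with split() and probe it with mvfind() (which returns null when there is no match); with literal values, num IN (0,1,2,3,4,5,6) also works directly:

```
index = test
| eval list_var = split("0,1,2,3,4,5,6", ",")
| eval num = 4
| where isnotnull(mvfind(list_var, "^" . num . "$"))
```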
This could be caused by the host values not being equal. Could you try your initial search with the "| eval hosts = lower(hosts)" command added at the end?
I have the following stanza in etc\system\local\inputs.conf. However, I don't see dynamic DNS update events being forwarded to the Splunk server. The local event viewer shows events after "ipconfig /release" followed by "ipconfig /renew". I also tried [WinEventLog://DNS Server] as the stanza name, to no avail. Appreciate any insight. Thanks, Billy

[WinEventLog://Microsoft-Windows-DNS-Server/Audit]
disabled = 0
renderXml = 1
whitelist = 519, 520
When trying this, the result was the same as the previous attempt: only the hosts and username fields populated.
Hi @mik3y  Thanks for the update and the workaround solution. In the end we moved away from this solution anyway, as the Salesforce streaming API did not provide the ability to track events that had already been ingested, potentially resulting in missed data during Splunk maintenance.
Ack, it seems I forgot to rename the hostname field to hosts, thus ruining the stats.

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| rename device.hostname as hosts
| rename device.username as username
| eval hosts = lower(hosts)
| stats values(*) as * by hosts
| table hosts, username, vendors, products, versions

The trick is to get the hosts values (e.g. HOST1 and host1) into the same case (hence the lower()). Then, if you do "stats values(*) as * by hosts", it will put together all the values for the other columns on one row for each unique value of hosts: one for host1, one for host2, and so on.
Does this work better?

| spath input=_raw path=details output=hold
| rex field=hold "\"(?<kvs>[^\"]*\"*[^\"]*\"*[^\"]*\"*)\"" max_match=0
| stats values(*) as * by kvs
| rex field=kvs "(?<key>[^\"]*)\" : \"(?<value>[^\"]*)" max_match=0
| table orderNum key value orderLocation
You may have to fiddle with the search query so that you URL-encode only the problematic characters in the eval statements, but leave alone the literal values used in filtering.
Normally I would, however I am running into an issue where:
1. I am querying a file attachment from ServiceNow that returns download URL(s) (can be an arbitrary number of URLs) presented by the API.
2. The URL(s) contain the file sys_id, e.g. "/api/now/1111111122222233333/file", and do not offer a way to download under the file name, requiring it to be renamed once in the container vault.
3. The HTTP app downloads the URL(s) under the generic file name "file" in a container vault and creates a vault_id.
4. I need to rename that file to the correct file name (from the ServiceNow data), using a bit of vault_add() magic.
Step 4 is where I lose the ability to reliably associate the original file name (via the ServiceNow sys_id) with the vault_id when passing multiple URLs. I don't see an easy or reliable way to capture the original file name and associate it with the correct vault_id. The attempted work-around is to use phantom.act() in this manner so that I can control the loop and guarantee the correct vault_id.
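The bookkeeping in step 4 could be sketched in plain Python: extract the sys_id from each download URL and keep a sys_id-to-file-name dictionary, so each downloaded file can be renamed afterwards. The URL shape, field names, and sample records below are assumptions based on the description above, and the actual SOAR vault call is left as a comment since its signature varies by platform version:

```python
import re

# Hypothetical ServiceNow attachment records: sys_id plus original file name.
attachments = [
    {"sys_id": "1111111122222233333", "file_name": "report.pdf"},
    {"sys_id": "4444555566667777888", "file_name": "evidence.png"},
]

# sys_id -> original file name, so the name survives the download loop.
name_by_sys_id = {a["sys_id"]: a["file_name"] for a in attachments}

def sys_id_from_url(url):
    """Pull the sys_id out of a download URL shaped like /api/now/<sys_id>/file."""
    m = re.search(r"/api/now/([0-9a-f]+)/file", url)
    return m.group(1) if m else None

urls = ["/api/now/1111111122222233333/file",
        "/api/now/4444555566667777888/file"]

for url in urls:
    sys_id = sys_id_from_url(url)
    original_name = name_by_sys_id.get(sys_id)
    # After the HTTP app download yields a vault_id for this URL, re-add the
    # file under the recovered name, e.g. via the SOAR playbook API
    # (hypothetical call; check your version's signature):
    # phantom.vault_add(container=container, file_location=path,
    #                   file_name=original_name)
    print(url, "->", original_name)
```

Because the dictionary is keyed by sys_id rather than by loop position, this association holds even when multiple URLs are processed in a single pass.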
So, I tried your solution and the result was:

hosts    username    vendors    products    versions
host1    user1
host2    user2
host3    user3
host4    user4

Also, I'm assuming you meant for the search to look like this:

(index=index1 sourcetype=sourcetype1) OR (index=index2 sourcetype=sourcetype2)
| rename device.hostname as hostname
| rename device.username as username
| eval hosts = coalesce(hostnames, hostname)
| eval hosts = lower(hosts)
| stats values(*) as * by hosts
| table hosts, username, vendors, products, versions

Otherwise, the search wouldn't yield any results.