All Posts

I apologize for giving wrong information.  IPv6 is 128-bit, not 64-bit.  Given this lookup table and advanced option match_type CIDR(ip):

expected            ip
true                2001:0db8:ffff:ffff:ffff:ffff:ffff:ff00/128
test mask           2001:db8:3333:4444:5555:6666::2101/128
test without mask   2001:db8:3333:4444:5555:6666::2101

This search now gives the correct output:

| makeresults
| fields - _time
| eval ip=mvappend("2001:db8:3333:4444:5555:6666:0:2101", "2001:db8:3333:4444:5555:6666::2101", "2001:0db8:ffff:ffff:ffff:ffff:ffff:ff00")
| mvexpand ip
| lookup ipv6test ip

expected    ip
test mask   2001:db8:3333:4444:5555:6666:0:2101
test mask   2001:db8:3333:4444:5555:6666::2101
true        2001:0db8:ffff:ffff:ffff:ffff:ffff:ff00

Hope this helps.
I'm assuming that the src and destination are in the same event.  geostats will not expand multivalue fields, so you will first have to duplicate the events and then run geostats, like this:

index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" (access_device.ip!="NULL" OR auth_device.ip!="NULL")
| eval ip=mvappend('access_device.ip', 'auth_device.ip')
| fields ip
| mvexpand ip
| iplocation ip
| geostats count by City
That's because at index time (when Splunk ingests data), fields like UserKey_ABC.job1 don't exist.  They are extracted at search time by some mechanism, but do not exist in the index.
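If you want to verify which fields actually made it into the index, walklex can list them (a hedged sketch; abc is the index name from the thread below, and walklex requires a reasonably recent Splunk version plus access to the indexed data):

| walklex index=abc type=field

Search-time-only fields such as UserKey_ABC.job1 should not appear in that output.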
You did not show the top-level nodes. (And it's always a bad idea to use screenshots to show data; use raw text.) If your upper array node is indeed called tokenData, Splunk should have something like tokenData{}.tokenData, tokenData{}.tokenId, etc.  To spread them out, first reach that array with spath.  That will convert the JSON array to an ordinary multivalue field tokenData{} so you can use mvexpand.  Lastly, use spath again on each element to extract single-value tokenData, tokenId.

| spath path=tokenData{}
| mvexpand tokenData{}
| spath input=tokenData{}

Hope this helps.
@yuanliu I'm fairly new to Splunk this year.  Can you explain what you mean by "You can still use the fields in statistical functions"?  I've tried:

| tstats count where index=abc Arguments.email="myemail@abc.com" by device_build, Arguments.test_url, UserKey_ABC.job1
| rename UserKey_ABC.Day as day, UserKey_ABC.job1 as job1

But that didn't work for me either. Thanks.
tstats only operates on indexed fields, so you cannot use search-time fields in the groupby clause.  You can still use those fields in statistical functions; you just need to define how you want to see their values.
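In practice, the usual workaround for search-time fields like these is to fall back to a plain search with stats (a hedged sketch assembled from the field names in this thread; untested against the actual data):

index=abc Arguments.email="myemail@abc.com"
| stats count by device_build, Arguments.test_url, UserKey_ABC.job1
| rename UserKey_ABC.job1 as job1

This is slower than tstats because it reads raw events, but it can group by any field that exists at search time.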
Hello, when I was trying to Add Account under AWS Configuration from the Splunk UI, I got the error message "SSL validation failed for https://*.amazonaws.com [SSL_CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate".  Any recommendation will be highly appreciated. Thank you so much.
Hi all,

I've worked with multivalue fields in a limited capacity and I'm having trouble with a particular instance. Generally, the multivalue fields I've worked with have been small or had static indexing, such that I could use mvindex or simple renaming to extract the value I needed. I've run across a situation in which I have a JSON array called 'tokenData' that is dynamically populated with smaller arrays of metadata, such that the index is not static. There will be hundreds of these in the array in a single Splunk event.

What I need to do is access these fields and extract the tokenData where the tokenId is a specific value, and compare that with other elements of the search. Example:

tokenId: 105
tokenLength: 70
tokenData: blahblah

I need to extract this into a field and check its value within the context of an alert. There will be some processing of the actual field as well, but that should be easy if I can get the value, correlated with the ID.

Things I know: the tokenId needed will always be static, the tokenLength of said tokenId will always be static, and the tokenData will change depending on the situation.

What is the best way to get this value consistently, when the array is not static? I'd need the value of the field tokenData wherever tokenId=target. Hope this was clear. Thanks
I'm having trouble capturing the custom key "UserKey_ABC" in the following search.  With the following code, I'm not able to see any results.  However, if I remove "UserKey_ABC", I am able to get the results.  I'm certain I do have this key in my events.  How do I approach this issue?

| tstats count where index=abc Arguments.email="myemail@abc.com" by device_build, Arguments.test_url, UserKey_ABC
| rename UserKey_ABC.Day as day, UserKey_ABC.job1 as job1, UserKey_ABC.Version as version, Arguments.test_url as test_url, device_build as build
| table build, lib, day, job1, version, test_url
At this point, you will basically be editing the various .conf files (inputs, outputs, props, transforms) in Notepad or some other text editor.  The CLI will mostly be for issuing commands to Splunk, which in the beginning will mostly be ./splunk stop, ./splunk start and ./splunk restart. Are you running headless, or do you have access to the web interface?
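For orientation, the first stanzas you edit often look something like this (a minimal sketch with placeholder paths, index, and host names - not a drop-in config):

# inputs.conf - monitor a log file
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp

# outputs.conf - forward events to an indexer
[tcpout:primary_indexers]
server = indexer.example.com:9997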
1. You're not trying to route to two indexes but to two indexers.
2. If you want the event to be sent to both of those groups, you're going to have to clone the event (maybe you can do it more easily with an ingest action). With your configuration, the second transform overwrites the result of the first one, so all your events will go to successGroup.
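To make the overwrite concrete, this is roughly what such a configuration looks like (hypothetical stanza and group names, sketched only to show the failure mode, not a fix):

# transforms.conf
[route_failure]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = failureGroup

[route_success]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = successGroup

# props.conf
[my_sourcetype]
TRANSFORMS-route = route_failure, route_success
# route_success runs last and overwrites _TCP_ROUTING, so every event goes to successGroup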
This works!! Thank you
It's not up to Splunk to configure your logging. Typically, if you download an add-on from Splunkbase, it has a docs page that describes how to configure the source to produce relevant logs.
And that makes sense. I assume that your script produces several events per host - one for each piece of software installed. So if you just filter the raw events to get only those not being a Carbon Black inventory/installation/whatever report, you'll still get all the remaining software from that host, so this host will still be in your results. You need to first group your results by host and then filter to get only those without Carbon Black:

index=windowsevent sourcetype="Script:InstalledApps"
| stats values(DisplayName) as DisplayName by host
| search NOT DisplayName="Carbon Black*"

One additional word of explanation about the last line: the search command makes use of how Splunk processes matching for multivalued fields - it tries every value of a multivalued field to decide whether it can find a matching one or not.
@kapenta  To adjust the map to see only a portion of it, such as just the USA, you need to edit the Latitude, Longitude, and Zoom settings for the map. You can set these in Format Visualization, on the General page. The following settings work for me for the continental USA:
- Latitude: 38.62
- Longitude: -93.91
- Zoom: 4

I was not happy with my attempts to show all 50 states (including Alaska and Hawaii) because of the excess empty/irrelevant space (oceans and Canada) shown.

Alternatively, when editing the dashboard, you can zoom in and out and move the map around, and if you are satisfied with how you have it looking, you can save these settings by going to Format Visualization and, on the General page, clicking Populate with current map settings.

(Refer to the attached map-usa.jpg file to see the settings I mentioned.)
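If the dashboard is Simple XML, the same view can also be pinned directly in the source (a hedged sketch using the standard map visualization options and the values above):

<option name="mapping.map.center">(38.62,-93.91)</option>
<option name="mapping.map.zoom">4</option>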
Hi @gcusello ,

I used your logic, but with a small change in the function used (floor instead of round). Would this make more sense to you?

| eval duration=round((now() - last_seen),0)
| eval days=if(duration>86400,floor(duration/86400),"0"),
       hours=if(duration>3600,floor((duration-days*86400)/3600),"0"),
       minutes=if(duration>60,floor((duration-days*86400-hours*3600)/60),"0"),
       seconds=duration-days*86400-hours*3600-minutes*60
| eval Output=days.if(days>0," days ","").hours.if(hours>0," hours ","").minutes.if(minutes>0," minutes ","").seconds." seconds"
The goal of this project is to create consistent logging across all servers in the environment. What tools exist on Splunk to achieve this? We are already ingesting existing logs properly.
Since 8.2.x, the EXPLORE SPLUNK ENTERPRISE window always opens, even if I CLOSE it. In previous versions (7.x), when I closed it in the Launcher app, the instance kept it closed until I re-opened it. Since 8.2.x it opens every time I open my UI, even if I previously clicked the close button to "make it sleep" and hide it. Any help?
***Notable response actions and risk response actions are always triggered for each result.
Nevermind, had a POST instead of GET