All Posts



@yuanliu I'm fairly new to Splunk this year. Can you explain what you mean by "You can still use the fields in statistical functions"? I've tried

| tstats count where index=abc Arguments.email="myemail@abc.com"
    by device_build, Arguments.test_url, UserKey_ABC.job1
| rename UserKey_ABC.Day as day, UserKey_ABC.job1 as job1

But that didn't work for me either. Thanks.
tstats only operates on indexed fields. You can still use the fields in statistical functions. So you need to define how you want to see these values; you just cannot use them in the groupby clause.
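A hedged sketch of that suggestion (untested; the field names are taken from the question above): the fields that cannot go in the groupby are pulled in through values() instead, while only indexed fields stay after by.

```
| tstats count values(UserKey_ABC.job1) as job1 values(UserKey_ABC.Day) as day
    where index=abc Arguments.email="myemail@abc.com"
    by device_build, Arguments.test_url
```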
Hello, when I try to Add Account under AWS Configuration in the Splunk UI, I get the error "SSL validation failed for https://*.amazonaws.com [SSL_CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate". Any recommendation would be highly appreciated. Thank you so much.
Hi all, I've worked with multivalue fields in a limited capacity and I'm having trouble with a particular instance. Generally, the multivalue fields I've worked with have been small or had static indexing, such that I could use mvindex or simple renaming to extract the value I needed. I've run across a situation in which I have a JSON array called 'tokenData' that is dynamically populated with smaller arrays of metadata, such that the index is not static.

There will be hundreds of these in the array in a single Splunk event. What I need to do is access these fields and extract the tokenData where the tokenId is a specific value, and compare that with other elements of the search. Example:

tokenId: 105
tokenLength: 70
tokenData: blahblah

I need to extract this into a field and check its value within the context of an alert. There will be some processing of the actual field as well, but that should be easy if I can get the value correlated with the ID.

Things I know: the tokenId needed will always be static, the tokenLength of said tokenId will always be static, and the tokenData will change depending on the situation.

What is the best way to get this value consistently, when the array is not static? I'd need the value of the field tokenData wherever tokenId=target. Hope this was clear. Thanks.
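One common pattern for dynamic JSON arrays like this (a hedged sketch, untested; tokenId=105 and the field names come from the example above, and the exact spath path depends on your event structure):

```
| spath path=tokenData{} output=token
| mvexpand token
| spath input=token
| search tokenId=105
| table tokenId tokenLength tokenData
```

spath extracts the outer array as a multivalue field, mvexpand splits it into one result per element, and the second spath re-parses each element so you can filter on tokenId regardless of its position in the array.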
I'm having trouble capturing the custom key "UserKey_ABC" in the following search. With the following code, I'm not able to see any results. However, if I remove "UserKey_ABC", I am able to get results. I'm certain I do have this key in my events. How do I approach this issue?

| tstats count where index=abc Arguments.email="myemail@abc.com"
    by device_build, Arguments.test_url, UserKey_ABC
| rename UserKey_ABC.Day as day,
    UserKey_ABC.job1 as job1,
    UserKey_ABC.Version as version,
    Arguments.test_url as test_url,
    device_build as build
| table build, lib, day, job1, version, test_url
At this point, you will basically be editing the various .conf files (inputs, outputs, props, transforms) in Notepad or some other text editor. The CLI will mostly be for issuing commands to Splunk, which in the beginning will be mostly ./splunk stop, ./splunk start, and ./splunk restart. Are you running headless, or do you have access to the web interface?
1. You're not trying to route to two indexes but to two indexers.
2. If you want the event to be sent to both those groups, you're going to have to clone the event (it may be easier with ingest actions). With your configuration, the second transform overwrites the result of the first one, so all your events will go to successGroup.
This works!! Thank you
It's not up to Splunk to configure your logging. Typically, if you download an add-on from Splunkbase, it has a docs page which describes how to configure the source to produce the relevant logs.
And that makes sense. I assume your script produces several events per host, one for each piece of software installed. So if you just filter the raw events to keep only those that are not a Carbon Black inventory/installation/whatever report, you'll still get all the remaining software from that host, so the host will still be in your results. You need to first group your results by host and then filter to keep only those without Carbon Black:

index=windowsevent sourcetype="Script:InstalledApps"
| stats values(DisplayName) as DisplayName by host
| search NOT DisplayName="Carbon Black*"

One additional word of explanation about the last line: the search command makes use of how Splunk processes matching for multivalued fields; it tries every value from a multivalued field to decide whether it can find a match.
@kapenta To adjust the map to see only a portion of it, such as just the USA, you need to edit the Latitude, Longitude, and Zoom settings for the map. You can set these in Format Visualization, on the General page. The following settings work for me for the continental USA:
- Latitude: 38.62
- Longitude: -93.91
- Zoom: 4
I was not happy with my attempts to show all 50 states (including Alaska and Hawaii) because of the excess empty/irrelevant space (oceans and Canada) shown. Alternatively, when editing the dashboard, you can zoom in and out and move the map around, and once you are satisfied with how it looks, you can save these settings by going to Format Visualization and, on the General page, clicking Populate with current map settings. (Refer to the attached map-usa.jpg file to see the settings I mentioned.)
Hi @gcusello,

I used your logic, but with a small change in the function used (floor instead of round). Would this make more sense to you?

| eval duration=round((now() - last_seen),0)
| eval days=if(duration>86400,floor(duration/86400),"0"),
    hours=if(duration>3600,floor((duration-days*86400)/3600),"0"),
    minutes=if(duration>60,floor((duration-days*86400-hours*3600)/60),"0"),
    seconds=duration-days*86400-hours*3600-minutes*60
| eval Output=days.if(days>0," days ","").hours.if(hours>0," hours ","").minutes.if(minutes>0," minutes ","").seconds." seconds"
The goal of this project is to create consistent logging across all servers in the environment. What tools exist on Splunk to achieve this? We are already ingesting existing logs properly.
Since 8.2.x, the EXPLORE SPLUNK ENTERPRISE window is always open, even if I close it. In previous versions (7+), when I closed it in the Launcher app, the instance kept it closed until I re-opened it. Since 8.2.x it opens every time I open my UI, even if I previously clicked the close button to hide it. Any help?
***Notable response actions and risk response actions are always triggered for each result.
Nevermind, had a POST instead of GET
Hi Mario, I'm trying to pull a list of the appids via curl using an access token, and I'm getting HTTP Status 405, Method Not Allowed. Thoughts?

curl -X POST -H "Content-Type: application/json;charset=UTF-9" "https://controller:443/controller/restui/eumApplications/getAllEumApplicationsData?time-range=last_1_hour.BEFORE_NOW.-1.-1.60" --header "Authorization: Bearer ${access_token}"
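Per the fix mentioned in the reply above (a POST where a GET was needed), the corrected call would look roughly like this (a sketch, not verified against a controller; I've also assumed charset=UTF-9 was a typo for UTF-8):

```
curl -X GET -H "Content-Type: application/json;charset=UTF-8" \
  "https://controller:443/controller/restui/eumApplications/getAllEumApplicationsData?time-range=last_1_hour.BEFORE_NOW.-1.-1.60" \
  --header "Authorization: Bearer ${access_token}"
```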
Hello, I am trying to use one cluster map to visualize the locations of a user's source and destination IPs for Duo logs. Currently, I have two separate cluster maps for each. Source IP Address Query: index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" access_device.ip!="NULL" | iplocation access_device.ip | geostats count by City   Destination IP Address Query: index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" auth_device.ip!="NULL" | iplocation auth_device.ip | geostats count by City   I'm somewhat new to visualizations and dashboarding, and was hoping for some assistance on writing a combined query that would display both source and destination IPs on a cluster map.
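One possible way to combine them (a hedged sketch, untested; it reuses the field names from the two queries above and relies on append, which is subject to subsearch limits):

```
index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" access_device.ip!="NULL"
| iplocation access_device.ip
| eval direction="source"
| append
    [ search index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" auth_device.ip!="NULL"
      | iplocation auth_device.ip
      | eval direction="destination" ]
| geostats count by direction
```

Splitting geostats by direction should render both sets of points on a single cluster map, with source and destination as separate series.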
I'm completely green with Splunk. I have downloaded Enterprise and have a profile, but I cannot seem to get it configured to work off of my home system, i.e. testing Splunk on myself. The steps stop working for me at "Configure the universal forwarder using configuration files". I'm not understanding how to access the config settings through the CLI to move beyond this step.
Hello. I'm trying to send logs from a heavy forwarder to 2 indexes. One is receiving logs, but the second is not.

Here is the props.conf file:

[test]
TRANSFORMS-routing=errorRouting,successRouting

Here is the outputs.conf file:

[tcpout:errorGroup]
server = 35.196.124.233:9997

[tcpout:successGroup]
server = 34.138.8.216:9997

Here is the transforms.conf file:

[errorRouting]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=errorGroup

[successRouting]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=successGroup

What could be the problem?
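If the intent is for every event to reach both output groups, one approach (a sketch based on the configuration above, not tested) is a single transform whose FORMAT lists both target groups, instead of two transforms that overwrite each other's _TCP_ROUTING value:

```
# transforms.conf -- one transform routing to both groups
[dualRouting]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=errorGroup,successGroup

# props.conf -- reference the single transform
[test]
TRANSFORMS-routing=dualRouting
```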