All Posts

| sort 0 ID
@gcusello  Thank you for all of your help!  I apologize for the lag in responding to you.  I have been very busy on another project.  Your suggestion was spot-on, though, so I wanted to be sure to thank you for your assistance.
We are in the process of upgrading our 8.2.6 Splunk distributed environment to 9.1.0. I would like to ensure that all of our apps and add-ons are 9.1 compatible (and make sure there is no Python 2 code). Is there an SPL search I can run that can help me identify apps or add-ons that need to be upgraded? I have installed the readiness app; however, it doesn't seem to capture all of the apps and add-ons in our environment.
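If it helps, here is a minimal sketch of one way to inventory installed apps and their versions across the search head and its peers via the REST API. This only lists what is installed; 9.1 compatibility and Python 3 readiness still have to be checked per app (for example on Splunkbase or with the Upgrade Readiness App):

| rest /services/apps/local splunk_server=*
| table splunk_server title label version disabled
| sort title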
That's exactly what I needed!!!  Thank you very much. 
Hi @Helios, let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
I believe you can do that.
@ITWhisperer I need to see the filtered in/out results to decide. All fields are already extracted. I need to keep the events with T[A].
Index size is 5.3 GB vs 1.6 GB of raw data (screenshots: raw data size, index size on Splunk). This is also affecting our licensing plans. This is way bigger than anticipated. I thought maybe 110% or even 180%, but not 400%. Something's off.
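For what it's worth, license usage is measured on the raw data ingested, not on index size on disk. A minimal sketch of one way to compare on-disk size to raw size for an index using dbinspect (your_index is a placeholder; rawSize is reported in bytes, sizeOnDiskMB in megabytes):

| dbinspect index=your_index
| stats sum(rawSize) as raw_bytes sum(sizeOnDiskMB) as disk_mb
| eval raw_gb=round(raw_bytes/1024/1024/1024,2)
| eval disk_gb=round(disk_mb/1024,2)
| eval disk_to_raw_ratio=round(disk_gb/raw_gb,2)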
We followed the steps in https://docs.splunk.com/Documentation/DM/1.8.1/User/AWSAbout to onboard the data from a single AWS account. During the onboarding process, AWS account details are input using the UI, after which Splunk generates the CloudFormation template. This template has a DM_ID, DM_Name and a few indexes which Splunk generates. Does Splunk have an API to script this? Our DevOps team wants to automate this process. P.S.: I was unable to find this in the API documentation.
Hello, I think this is a simple answer but I'm not able to find a solution. I created a lookup table that looks like this (but of course has more info):

Cidr, ip_address
24, 99.99.99.99/24
25, 100.100.100/25

I only included the Cidr column because I read that the lookup table needs at least two columns, but I do not use it. Let me know if I should! I am trying to find source IPs that match the ip_address entries in my lookup table.

index="index1" [| inputlookup lookup | rename ip_address as src_ip]

I have ensured that, under Advanced Settings -> Match type, I set CIDR(ip_address). When the query is run, no matches are found, but I know that there is traffic from those addresses. What am I overlooking?
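One thing worth noting: the CIDR match type configured on the lookup definition only takes effect when the lookup is invoked as a lookup (automatic, or via the lookup command); a subsearch like the one above expands to literal src_ip="99.99.99.99/24" terms, which won't match plain IP addresses. A minimal sketch of the lookup-command form, assuming the lookup definition is the one named "lookup" in the subsearch above and that src_ip is already extracted:

index="index1"
| lookup lookup ip_address AS src_ip OUTPUTNEW ip_address AS matched_cidr
| where isnotnull(matched_cidr)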
Okay, thank you.
I wanted to reach out for help regarding an issue we have been experiencing with one of our customers. We built an app that exports events from a standalone Splunk Enterprise instance at the customer site. We have that box gather the logs and hold them until they can be exported off the box manually. We used the savedsearches.conf file to schedule a search query script (export.py) to pull events.

The problem is that this particular customer is only getting about 11 minutes worth of logs: the search is scheduled to pull all index events from, let's say, 3:30pm-4:30pm, but the events only load from 4:19pm-4:30pm. It does this across all times consistently, missing roughly the first 49 minutes of events each hour, for example:

4:19pm-4:30pm
5:19pm-5:30pm
6:19pm-6:30pm

We have an export.py script that goes out and gathers all index=* events according to the cron specified.

savedsearches.conf:
cron_schedule = 30 */1 * * *
enablesched = 1
dispatch.ttl = 1800
allow_skew = 10m
search = | export
disable = 0

To compensate for lags, we built the export.py script to pull the events starting 1 hour prior. This is the part of the script dealing with the specific search:

now = str(time.time()-3600).split(".")[0]
query = "search index=* earliest=" + last_scan + "  lastest=" + now + "

Once the script is done, it writes the "now" value (epoch time) to a file, which is used as the start of the next scheduled run. Any help would be appreciated.
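For reference, a minimal sketch of what the dispatched search string is presumably meant to look like once the epoch boundaries are filled in (the epoch values below are placeholders; the time modifiers Splunk recognizes are spelled earliest and latest):

search index=* earliest=1692003000 latest=1692006600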
If I understand you correctly, the query should work like this:

index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2)
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds
| eventstats count as failed_count by Device IONS
| where failed_count>=10
| timechart dc(IONS) as IONS span=1d

This will show you the number of users with 10 or more failed logons on each day.
I had been struggling with the same problem. After a lot of experimentation with different ideas and inspecting SAML payloads, my two main findings, as best as I can tell, are:

When you configure a Google Groups mapping in the Google SAML configuration, Google will send the group name as an attribute, exactly as if it were an attribute set up in the attribute mapping.

When Splunk receives a SAML assertion with a role attribute, I think it will try to match it against roles as well as SAML groups. Though in my case all of the role attributes I use are SAML group names, so I cannot confirm that it will match the "role" attribute against an actual role name.

But also crucially, when you update Google SAML configurations, it can take 5-10 minutes for the update to "go live". So watch the SAML assertions that you are actually sending to Splunk as you experiment, because otherwise you'll make a change, and even if you get it right it'll appear not to work, you'll make more changes, and suddenly things work, but actually the working configuration was n attempts ago, and it will break itself as it slowly updates to your later configuration attempts, and all you'll know is that something you tried at some point over the last however long was correct.

So the net result: set the "App attribute" to "role", exactly like you did in your screenshot. If you have created a role in Splunk whose name is the same as your Google group, you're done. If your Google group has a different name than your role, then set up a SAML group in Splunk with the same name as your Google group and assign it the role you want. Splunk will lowercase the group name; that's fine, it'll still match. As a result, you can actually use both (e.g. a group to grant "user" access, and individual user attributes to grant admin access).

In my case, I already had a Google group called "Engineering" that I wanted to set up with the "user" role. Here are my configs:

Splunk:
1. Configure SAML groups with names corresponding to your Google groups.
2. Configure your Google SAML configuration. If you plan to use both user attributes and Google groups, set both a user attribute and a Group membership, both pointing to the "role" App attribute. If you only plan on using groups, you can omit the user attribute. In my case, as you can see from my Splunk config above, I want the "Engineering" Google group to all have "user" access to Splunk.
3. If you want to specify role overrides, set them as you did before.
4. If it isn't working, decode and review the SAML assertion that Splunk is receiving. It can take a surprisingly long time for changes made in Google's SAML configuration to go "live". You will likely observe that you're passing along a SAML assertion that does not reflect your most recent Google configuration changes -- if that's the case, just wait a while and try again in a bit.
Hey @Southy567, To achieve the use case, you can have alert throttling enabled. You can find the throttle checkbox in the below screenshot. Once you check the throttle checkbox, you can suppress the alerting for 4 hours as shown in the below screenshot. So if the alert is suppressed for 4 hours, the SNOW ticket will not be created for the users that already have a SNOW ticket raised. After 4 hours, the alerting should resume as normal for the same set of users. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.
I was able to find this search that gives me the number of users (IONS) who disconnected 10 or more times; however, it gives me the total based on the whole time range. I would like to display a daily number for 30 days in a line chart. For example, on Monday there were 10 users who disconnected over 10 times, and so on for the rest of the week. I can't seem to get timechart to work with this:

index=gbts-vconnection sourcetype=VMWareVDM_debug "onEvent: DISCONNECTED" (host=Host1 OR host=host2)
| rex field=_raw "(?ms)^(?:[^:\\n]*:){5}(?P<IONS>[^;]+)(?:[^:\\n]*:){8}(?P<Device>[^;]+)(?:[^;\\n]*;){4}\\w+:(?P<VDI>\\w+)" offset_field=_extracted_fields_bounds
| stats count by Device IONS
| where count >= 10
| appendpipe [| stats count as IONS | eval Device="Total"]
Okay, thank you. I am quite good with the default JSON but I'll take a look and maybe do some tuning. Thanks again.
Have you asked the CyberArk admins to add your SH IP to the approved list for that safe?
_json is the generic format that's applied to events if Splunk recognizes them as JSON, so it's not the best idea. It's best to be as specific as possible with your definition (ideally a sourcetype should have the set of parameters responsible for proper event breaking and time recognition defined - the so-called Great Eight). So while _json is fairly generic, you should have your own specific settings for - for example - time extraction. Another thing is that even though the format of the event might just be JSON, specific sourcetypes can have different additional aliases or calculated fields defined (for example for CIM compliance). So you want to have your events "pinned" to a specific sourcetype instead of the generic _json.
Well, I think the part I'd misunderstood is "via normal props.conf rules". I thought "all events that the transform is applied to" referred to the real transform (i.e. the REGEX + FORMAT), so I didn't understand why non-matching events (as per REGEX) would get cloned too. Now I get it; I'll try to find a way to work around that later. About cloning _json into a new sourcetype and using that clone in my config, what would be the gain (apart from best practice's sake)?