All Posts

Hi @Jose.Macias, I have some followup questions, but given the sensitive nature of the info, I'm going to send you a Community Private Message. Please read it and respond there. 
Has the data already been deleted?  If so, submit a request to Splunk Cloud Support.  They may be able to recover the data, but be prepared for them to say "no". If the data has not yet been deleted, then how to re-index it depends on how it was onboarded in the first place, whether the original source is still available, and what portion (if not the whole) of the data must be re-indexed.
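Purely as an illustration (the file path, index, and sourcetype below are placeholders): if the original file still exists on a forwarder and the goal is a one-time re-read of that single file, the oneshot CLI input is one option.

$SPLUNK_HOME/bin/splunk add oneshot /var/log/myapp/app.log -index my_index -sourcetype my_sourcetype   # re-reads this one file once

Whether that is appropriate depends, as noted above, on how the data was onboarded originally.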
Hi @Dietrich.Meier, Thanks for sharing this. I've shared it with the people who I think should see it. 
Hello @Jian.Zhang, Giving this a bump. I need more clarity from you on exactly what you want to be deleted. 
Is there a solution for free licenses?
So, a few questions:
What version of the Windows TA are you using on your search head?
What version of the Windows TA is on the UF for this data?
What does your inputs.conf look like for the following stanza?
[WinEventLog://Security]
As @PickleRick said in his comment, this doesn't look like a standard Windows Event Log.
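For reference, a minimal monitoring stanza for that channel usually looks something like the sketch below (the index name is a placeholder, and renderXml may be true in your environment):

[WinEventLog://Security]
disabled = 0
renderXml = false
index = wineventlog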
1. Not exactly.  Here's what limits.conf.spec says about the fishbucket size limit:

file_tracking_db_threshold_mb = <integer>
* The size, in megabytes, at which point the file tracking database, otherwise known as the "fishbucket" or "btree", rolls over to a new file.
* The rollover process is as follows:
  * After the fishbucket reaches 'file_tracking_db_threshold_mb' megabytes in size, a new database file is created.
  * From this point forward, the processor writes new entries to the new database.
  * Initially, the processor attempts to read entries from the new database, but upon failure, falls back to the old database.
  * Successful reads from the old database are written to the new database.

Notice the old database file stays around even when a new database file is created.  That implies the file_tracking_db_threshold_mb value is at least doubled.  When the database is saved, it's doubled again for each file (new and old), so 4x.

2. I see what you mean, although this is true for any TA, not just nmon.  The more input files you have, the more that must be tracked in the fishbucket.
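If the rollover point does need to be raised, the setting goes in limits.conf on the forwarder; the sketch below assumes the standard [inputproc] stanza and uses an arbitrary 1000 MB value purely as an example, so keep the 2x/4x disk overhead described above in mind.

# limits.conf on the forwarder (system/local or a custom app)
[inputproc]
file_tracking_db_threshold_mb = 1000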
You can't have multiple columns with the same name, so try this:

index= source IN ("") "uniqObjectIds" OR "data retrieved for Ids"
| eval PST=_time-28800
| eval PST_TIME=strftime(PST, "%Y-%d-%m %H:%M:%S")
| spath output=uniqObjectIds path=uniqObjectIds{}
| where isnotnull(uniqObjectIds)
| spath output=uniqueRetrievedIds path=uniqueRetrievedIds{}
| stats values(*) as * by _raw
| table uniqObjectIds, uniqObjectIdsCount, uniqObjectIds{}, PST_TIME
| sort - PST_TIME
| appendcols
    [ search index= source IN ("") "data retrieved for Ids"
    | eval PST=_time-28800
    | eval PST_TIME2=strftime(PST, "%Y-%d-%m %H:%M:%S")
    | spath output=uniqueRetrievedIds path=uniqueRetrievedIds{}
    | where isnotnull(uniqueRetrievedIds)
    | stats values(*) as * by _raw
    | table uniqueRetrievedIds{}, uniqueRetrievedIds, PST_TIME2
    | sort - PST_TIME2 ]
| appendcols
    [ search index= source IN ("") "data not found for Ids"
    | eval PST=_time-28800
    | eval PST_TIME3=strftime(PST, "%Y-%d-%m %H:%M:%S")
    | spath output=dataNotFoundIds path=dataNotFoundIds{}
    | where isnotnull(dataNotFoundIds)
    | stats values(*) as * by _raw
    | table dataNotFoundIds{}, dataNotFoundIdsCount, PST_TIME3
    | sort - PST_TIME3 ]
Hi @SplunkExplorer, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Compression between Federated Search client and provider is handled by SSL/TLS.
Hello, I am reaching out to ask if there is any way to make the chart generated with the scheduled PDF report option look better. We have this dashboard (screenshot omitted), and it looks fine; everything is nice and clean. When we schedule a PDF and generate it, it does not look good (screenshot omitted). As an FYI, the screenshot included both types of chart formatting: one had the values in the middle, and one had values above. Both look good in the dashboard, but neither looks good in the PDF. Is there a way to edit the chart bars so they have more space, or to edit the size of the numerals above the bars? Is there an app that allows better editing of PDFs within Splunk? I feel like we have done everything we can to make the PDF look good, but we cannot seem to get the numbers to look good in it. Thank you for any guidance.
Hi Splunkers, in the end I worked with Support and we figured out why the regex was not working: when applied to a monitor input on a UF, as in this case, the blacklist parameter is applied to the file name and/or path. Our purpose is to filter based on the file payload, and to achieve that we must work on the HF, changing inputs.conf or both props.conf and transforms.conf.
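For anyone who finds this later, the usual payload-based filtering pattern on a HF is a nullQueue transform; the sourcetype name and REGEX below are placeholders to adapt to your data.

# props.conf on the HF
[my_sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted_events

# transforms.conf on the HF
[drop_unwanted_events]
REGEX = pattern_to_discard
DEST_KEY = queue
FORMAT = nullQueue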
One or more of the forwarders are either not using the Deployment Server, are not in a serverclass on the DS, or have outputs.conf set in $SPLUNK_HOME/etc/system/local (which overrides settings from the DS).  Those forwarders still have the old indexer configured and are not getting the new indexer list from anywhere. You'll have to sign in to the forwarders in question and repair them manually.  Move settings from $SPLUNK_HOME/etc/system/local to custom apps.  Ensure they are consulting the DS and are represented in a serverclass on the DS.
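As a rough sketch (the app name, host names, and port are placeholders), the repaired outputs.conf in a custom app on the forwarder would point at the new indexers, and btool shows which copy of the settings currently wins:

# $SPLUNK_HOME/etc/apps/my_outputs_app/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = new-indexer1.example.com:9997, new-indexer2.example.com:9997

# Check effective settings and which files they come from:
$SPLUNK_HOME/bin/splunk btool outputs list --debug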
Hi, can someone help me? I'm trying to call a webhook on AWX Tower (Ansible) using the Add-On Builder. This is my script; it doesn't work, but I don't get an error message either:

# encoding = utf-8

def process_event(helper, *args, **kwargs):
    """
    # IMPORTANT
    # Do not remove the anchor macro:start and macro:end lines.
    # These lines are used to generate sample code. If they are
    # removed, the sample code will not be updated when configurations
    # are updated.
    [sample_code_macro:start]

    # The following example gets the alert action parameters and prints them to the log
    machine = helper.get_param("machine")
    helper.log_info("machine={}".format(machine))

    # The following example adds two sample events ("hello", "world")
    # and writes them to Splunk
    # NOTE: Call helper.writeevents() only once after all events
    # have been added
    helper.addevent("hello", sourcetype="sample_sourcetype")
    helper.addevent("world", sourcetype="sample_sourcetype")
    helper.writeevents(index="summary", host="localhost", source="localhost")

    # The following example gets the events that trigger the alert
    events = helper.get_events()
    for event in events:
        helper.log_info("event={}".format(event))

    # helper.settings is a dict that includes environment configuration
    # Example usage: helper.settings["server_uri"]
    helper.log_info("server_uri={}".format(helper.settings["server_uri"]))

    [sample_code_macro:end]
    """
    helper.log_info("Alert action awx_webhooks started.")

    # TODO: Implement your alert action logic here
    import requests

    url = 'https://<AWX-URL>/api/v2/job_templates/272/gitlab/'
    headers = {'Authorization': 'X-Gitlab-Token: <MYTOKEN>'}

    response = requests.post(url, headers=headers, verify=False)
    print(response.status_code)
    print(response.text)
Hi @ITWhisperer, I want all three queries combined, with their results displayed in one table based on entity:suppliedMaterial. The desired columns are:

uniqObjectIds, uniqObjectIdsCount, PST_TIME, uniqueRetrievedIds{}, uniqueRetrievedIds, PST_TIME, dataNotFoundIds, dataNotFoundIdsCount, PST_TIME

Sample rows:
122598 268817 88888888888 99999999999999999 abc 5 122598 268817 88888888888 99999999999999999 abc 5 122598 268817 88888888888 99999999999999999 abc 2023-08-11 06:38:01 122598 268817 122598 268817 2023-08-11 06:38:01
88888888888 99999999999999999 abc 3 122598 268817 88888888888 99999999999999999 abc 5 122598 268817 88888888888 99999999999999999 abc 2023-08-11 06:38:01 122598 268817 122598 268817 2023-08-11 06:37:44
(remaining rows are similar to the above)
Almost.  The append must use the same field names as the main search (I used field names from your example output).

| eval appType = case(SourceName="Foo \"bar(\"", "app1",
    SourceName="Foo \"quill(\"", "app2",
    SourceName="Foo", "app3",
    source=abcde, "app4",
    sourcetype=windows AND eventcode=11111, "app5",
    1==1, "other")
| stats count by appType
| append
    [ makeresults format=csv data="appType,count
app1,0
app2,0
app3,0
app4,0
app5,0" ]
| stats sum(count) as count by appType
We covered that yesterday.  You need a regular expression that matches the field name and value.

[mysourcetype]
SEDCMD-rm_cs2 = s/(cs2=.*?(cs|\s*$))/\2/
SEDCMD-rm_cs2Label = s/(cs2Label=.*?(cs|\s*$))/\2/

The regexes look for either "cs2" or "cs2Label" followed by any characters up to the next field or the end of the event.  They replace it with the following field to avoid breaking that field.
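To illustrate with an invented CEF-style snippet (the field values are made up), applying the two SEDCMDs turns the first line below into the second, leaving the neighbouring fields intact:

Before: cs1=alpha cs2=secretvalue cs2Label=SecretField cs3=beta
After:  cs1=alpha cs3=beta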
What would your expected result look like?
@richgalloway so combining your responses, something like this?

| eval appType = case(SourceName="Foo \"bar(\"", "app1",
    SourceName="Foo \"quill(\"", "app2",
    SourceName="Foo", "app3",
    source=abcde, "app4",
    sourcetype=windows AND eventcode=11111, "app5",
    1==1, "other")
| stats count by appType
| append
    [ makeresults format=csv data="app_name,error_count
app1,0
app2,0
app3,0
app4,0
app5,0" ]
| stats sum(error_count) as error_count by app_name