All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Splunk team, do you know if it's possible to run a Splunk search from a Python script, taking into account the selection of the desired period (last week, last month, ...)? For example, for a simple search:

| append [search sourcetype=ESP_Histo env="Web" service="Application" apps="Web Sharepoints" fn="*" managed_entity="*- URL" sampler="SHAREPOINT - URL" ID_EVENT | eval Applications=case( apps like "%Web Sharepoints%","WEB SHAREPOINTS") | `transactions` ]
| eval max_time=if(info_max_time == "+Infinity", now(), info_max_time)
| eval min_time=if(info_min_time == "0.000", 1577836800, info_min_time)
| eval periode1=max_time-min_time
| stats sum(duration) AS durationindispo by Applications, periode1
| outputcsv append=true override_if_empty=false web_search
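A minimal sketch of one way to do this with the splunklib Python SDK (the splunk-sdk package on PyPI); host, credentials, and the search itself are placeholders, and the period is selected with the job's earliest_time/latest_time parameters instead of being hard-coded in the SPL:

import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (8089 by default)
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Relative time modifiers pick the period: last week here,
# or earliest_time="-1mon@mon", latest_time="@mon" for last month
stream = service.jobs.oneshot(
    'search sourcetype=ESP_Histo env="Web" service="Application" '
    '| stats sum(duration) AS durationindispo by Applications',
    earliest_time="-1w@w", latest_time="@w", output_mode="json")

# Iterate over the returned rows (JSONResultsReader needs a recent splunk-sdk)
for row in results.JSONResultsReader(stream):
    print(row)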
Hi there, I'm using Splunk Enterprise 7.0.3 and Splunk DB Connect 3.1.3. I have a Data Lab Output configuration using a real-time saved search to export Splunk logs to a MySQL DB, and it was working just fine until recently. What I'm experiencing now is that DB Connect doesn't export some of the data from Splunk to the DB (some of it is missing from the DB). The issue is resolved when I use a scheduled search with a cron schedule, but I'd like to use the real-time saved search again if possible. I hope I can get a solution. Thanks in advance.

EDIT: I forgot to mention that the Data Lab Output actually stopped exporting from the Splunk logs entirely approximately 2 weeks ago; I changed the real-time saved search time range from "all time (real-time)" to an "all time, 15 minute window" and then it worked again, but the issue I explained at the beginning still happens.
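In case it helps while real-time output is unreliable: a common workaround is a frequent scheduled search whose window is bounded by arrival time and lags slightly behind now, so late-arriving events are not skipped. A sketch (index and sourcetype are placeholders) for a search scheduled every 5 minutes:

index=main sourcetype=my_app_logs _index_earliest=-10m@m _index_latest=-5m@m
| table _time host message

Using _index_earliest/_index_latest bounds the window by index time rather than event time, which is usually what you want for exports.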
Hi everyone,

I am trying to remove partial duplicates in the same field, but couldn't find a solution yet. For instance, I have these values in the same field:

http://www.g
http://www.go
http://www.google.com

I would like to keep only the full value (http://www.google.com). I tried to use dedup and mvfilter:

eval url_in_parameter=mvfilter(!url_in_parameter LIKE url_in_parameter*)

I am still a beginner with Splunk and couldn't find any similar topics on the internet.

Thanks for your help and have a good day.
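If every shorter value is a prefix of the longest one, as in the example, one hedged approach is to sort the multivalue field and keep only the last entry, since lexicographic order puts the longest extension of a shared prefix last. A runnable sketch:

| makeresults
| eval url_in_parameter=split("http://www.g,http://www.go,http://www.google.com", ",")
| eval url_in_parameter=mvindex(mvsort(url_in_parameter), -1)

This assumes the values really are prefixes of each other; unrelated URLs mixed into the same field would need a different filter.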
I have two fields, skill1 and skill2 (sample values for each are in the screenshots). Both of these queries produce results:

timechart span=1d count by skill1
timechart span=1d count by skill2

I want to create a separate variable, skill, which contains the difference between skill1's values and skill2's values, and create a timechart out of it. I tried:

timechart span=1d count by skill1-skill2

but it's not working. Any help would be appreciated!
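count by <field> splits the count into one series per value of that field, which is why skill1-skill2 is read as a (nonexistent) field name rather than a subtraction. One sketch that should produce the difference, assuming each event carries skill1 and/or skill2: count the non-null occurrences of each field per day, then subtract with eval:

... | timechart span=1d count(skill1) AS skill1_count count(skill2) AS skill2_count
| eval skill=skill1_count-skill2_count
| fields _time skill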
Our Splunk deployment is on-prem and we have automated deployment based on app updates. Users create a PR, make updates, and push to Bitbucket. Automated tests run, and if everything is OK the changes are in production within the hour. We are considering the move to Splunk Cloud but are worried about the manual app vetting process. That process seems cumbersome and might mean that the 5-10 changes across multiple apps we make each week will form an endless backlog. How are you finding the app vetting process? Is it working OK or are you finding it painful? This question is aimed at large corporates who have moved from on-prem or BYOL to Splunk Cloud.
Hi, I have 2 queries:

(index=abc OR index=def) category= * OR NOT blocked =0 AND NOT blocked =2
| rex field=index "(?<Local_Market>[^cita]\w.*?)_"
| stats count(Local_Market) as Blocked by Local_Market
| addcoltotals col=t labelfield=Local_Market label="Total"
| append [search (index=abc OR index=def) blocked =0
    | rex field=index "(?<Local_Market>\w.*?)_"
    | stats count as Detected by Local_Market
    | addcoltotals col=t labelfield=Local_Market label="Total"]
| stats values(*) as * by Local_Market
| transpose 0 header_field=Local_Market column_name=Local_Market
| addinfo
| eval date=info_min_time
| fieldformat date=strftime(date,"%m-%d-%Y")
| fields - info_*

1) The above query gives me the correct date/time, but when I schedule it as a report, the date comes through as epoch time in the CSV file.
2) Also, how can we get the date field in the first column instead of the last column without disturbing the other fields?

Thanks in advance.
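On 1): fieldformat only changes how a value is rendered in Splunk Web, so exports such as a scheduled report's CSV still contain the underlying epoch number; converting with eval instead writes the formatted string into the field itself. On 2): table (or fields) can reorder columns, with * keeping the rest in place. A sketch of the tail of the query:

| addinfo
| eval date=strftime(info_min_time, "%m-%d-%Y")
| fields - info_*
| table date *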
The internal metrics log has information on TCP in and out. This is useful to see the traffic profile between Splunk tiers. We are planning our migration to the cloud and this will help with sizing the cloud pipe. Has anyone created dashboards with this information they could share with us?
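Not a finished dashboard, but panels like this are usually built on the tcpin_connections (and tcpout_connections) groups in metrics.log. A hedged starting point for the inbound side, using the standard field names from that log:

index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=1h sum(kb) AS inbound_kb by sourceIp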
Hi All, my customer wants to collect mail transmission/reception logs using the "Microsoft O365 Email Add-on for Splunk" app, but no data is received and I am getting the error below (I attached the full error logs).

-----------------------------
2021-09-15 11:03:56,355 ERROR pid=50226 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/ta_microsoft_o365_email_add_on_for_splunk/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/o365_email.py", line 136, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA_microsoft_o365_email_add_on_for_splunk/bin/input_module_o365_email.py", line 182, in collect_events
    messages.append(messages_response['value'])
KeyError: 'value'
-----------------------------

Thank you in advance.
Best regards,
Bob Hwang
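For what it's worth, a KeyError on 'value' means the Microsoft Graph response contained no value array, which usually indicates the API returned an error object (authentication, permissions, or throttling) instead of a message list. The add-on's own internal log normally contains the full response; a hedged starting point (the exact source name is an assumption based on the add-on's directory name):

index=_internal source=*o365_email* (ERROR OR WARNING)
| sort - _time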
Hello Splunk, I want to create a V-model structure in a dashboard, but I am not aware of how to do it; can you please help me with it? I am posting a picture below of how I want the dashboard to look. Thanks in advance, Renuka
Hello All. In indexer clustering, one peer is not searchable and its status is Down. What is the process to fix it? Please, can anyone help? This is a big issue we are facing.
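As a first diagnostic step, the peer and bucket state can be inspected from the cluster master. A sketch using the clustering REST endpoint (run on the master; this is the pre-9.x endpoint name, and the field list is an assumption from typical output):

| rest /services/cluster/master/peers
| table label status site bucket_count

From there, the usual path is checking splunkd.log on the down peer and restarting it so the master can bring it back into the cluster.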
When you get an incident in Splunk ES, the notable is often populated with 'additional fields', some of these custom, some out of the box. I'm looking to see which fields would be displayed for a notable, from either searching the notable macro or the API if need be. Searching the notable macro, I often get 100+ fields for a notable, but maybe only 15 are displayed in the notable itself, whereas some other notable may only have 5 displayed. Is there a way to do a search that indicates which fields would be displayed in the 'additional fields' of the notable? For reference, the additional fields I'm talking about are mentioned here under 'Add a field to the notable event details': https://docs.splunk.com/Documentation/ES/6.6.0/Admin/Customizenotables
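I'm not aware of a single search that returns exactly the configured list, since the displayed subset comes from the Incident Review Settings described on the page linked above. However, comparing that configured list against the full field inventory of a given notable is straightforward; a sketch for the inventory side:

`notable`
| head 1
| fieldsummary
| fields field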
Hi there, I'm seeing a strange problem with version 8.0.8. I have a search to build a lookup table one time only, which has to look back 60 days in order to populate correctly. I'm also working on hourly updates for this, but that part is much less of a problem. I do not have access to summary indexing here, so don't bother mentioning it! It so happens that the admins of this site have enabled tsidx reduction, so I'm seeing this warning:

Search on most recent data has completed. Expect slower search speeds as we search the reduced buckets.

Not in itself a huge problem, as in the search box I eventually see all the available results. HOWEVER... when I run this search into outputlookup, it doesn't seem to wait for all the results to render, and goes off and writes the lookup table without (very annoyingly) waiting for one particular column to appear. Is this a bug? Has anyone else seen this? Any way around it? It seems to be entirely consistent, and I want to stop hammering the Splunk instance with this very expensive search!

Cheers, Charles
Hello All, I have set up the Splunk Add-on and Splunk App for Unix and Linux. Data is flowing properly; however, I am having an issue with alerts. I am trying to set up alerts for various things to Slack. I have the first alert, on memory, working. I set it to 1 min real-time and it seems to work just fine. This is the working query:

`os_index` source=vmstat | where max(memUsedPct) > 90 | stats max(memUsedPct) by host

However, when I try to do the same for disk, it does not work. I have tried expanding to 5 min and 30 min real-time windows, but the only way I get data to show up in this query is by removing the where clause. I also tried using something like latest() instead of max(), but that didn't help. What am I doing wrong here?

`os_index` source=df | where max(UsePct) > 10 | stats max(UsePct) by host

Thank you, jackjack
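In eval/where, max() with a single argument is just that value, so the working memory alert is effectively where memUsedPct > 90 evaluated per event. The df version most likely fails because UsePct is a string (the Unix app's df output typically includes a percent sign, e.g. "42%"), and the comparison never matches. A sketch that normalizes the value and aggregates before filtering (the %-stripping is an assumption about how your UsePct is extracted):

`os_index` source=df
| eval UsePct=tonumber(replace(UsePct, "%", ""))
| stats max(UsePct) AS max_use_pct by host
| where max_use_pct > 10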
I'm trying to use a link list dropdown to fill in URLs of specific CSV files from the Splunk Lookup Editor app into a dashboard panel. I've followed the instructions to re-enable iframes in Splunk 8.0.x+, and the frame itself forms, but it's stuck saying "loading". After doing some web browser debugging, it appears this is because the Lookup Editor app is calling JavaScript and there is an HTML tag that sets class="no-js", but I'm not a very good HTML debugger and I can't tell whether this is being done in the parent app's CSS or the child app's CSS. I think, however, that if I could get the iframe to support the JavaScript, I should be in good shape. It's all the same Splunk instance, but I haven't had any luck yet. Any help is greatly appreciated!
I have the following log:

!!! --- HUB ctxsdc1cvdi013.za.sbicdirectory.com:443 is unavailable --- !!!
user='molefe_user' password='molefe' quota='user'
host='002329bvpc123cw.branches.sbicdirectory.com' port='443' count='1'
!!! --- HUB 002329bvpc123cw.branches.sbicdirectory.com:443 is unavailable --- !!!
host='005558bvpc5ce4w.za.sbicdirectory.com' port='443' count='1'
!!! --- HUB 005558bvpc5ce4w.za.sbicdirectory.com:443 is unavailable --- !!!
host='41360jnbpbb758w.za.sbicdirectory.com' port='443' count='1'
!!! --- HUB 41360jnbpbb758w.za.sbicdirectory.com:443 is unavailable --- !!!
host='48149jnbpbb041w.za.sbicdirectory.com' port='443' count='1'
!!! --- HUB 48149jnbpbb041w.za.sbicdirectory.com:443 is unavailable --- !!!
user='pips_lvl_one_user' password='pips_lvl_one' quota='user'

I'm struggling to extract the highlighted items:

ctxsdc1cvdi013.za.sbicdirectory.com = as workstation ID
is unavailable = as status
molefe = as quota
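A hedged sketch of extractions that match the sample exactly (if one event contains several HUB lines, add max_match=0 to capture them all as a multivalue field):

| rex "HUB (?<workstation_id>\S+):\d+ (?<status>is unavailable)"
| rex "user='(?<quota>[^']+)_user'"

The second rex assumes the quota you want is always the user value with its trailing _user stripped, as in molefe_user -> molefe.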
I am trying to control the ingest rate into Splunk Cloud. I have some firewalls that are very chatty. The firewalls themselves can only point to a single syslog destination.

For security and compliance reasons, I need to retain and store ALL logs for one year. We have an appliance that forwards to our SOC, and it basically has unlimited storage. For reporting and alerting, I need to send most messages into Splunk Cloud. Logging is controlled by ACL, and in the syslog messages you see ACLs. Based on how my firewall is configured, there are a few ACLs that are chattier than others; for example, the implicit deny ACL is CONSTANTLY chatting. The only time I really need to see this ACL in Splunk logs is when I am troubleshooting; however, the SOC wants to see this ACL all the time. The implicit deny rule accounts for about 30% of all syslog data generated. Ideally, when I write to disk on the syslog-ng server, I would like to drop the implicit deny logs so that when the Universal Forwarder reads the log, it won't be sending that unneeded 30% overhead (the implicit deny rule alone accounts for about 20-50 GB of ingest a day). My initial log path statement looks like the following:

log { source(s_udp514); filter(f_device); destination(d_socappliance); destination(d_disk); flags(final); };

I then tried 2 different log path statements to try and separate the traffic so that I can apply the message drop filter:

filter f_device { ( host("192.168.1.1") or host("fqdn.device.com") ) };
filter f_device_msgdrop { ( not match("aclID=0" value(MESSAGE)); ) };

log { source(s_udp514); filter(f_device); destination(d_socappliance); flags(final); };
log { source(s_udp514); filter(f_device); filter(f_device_msgdrop); destination(d_disk); flags(final); };

aclID=0 is the ACL ID of the implicit deny rule. The concept here is that if the string "aclID=0" exists in the syslog message, I don't want to write it to disk; therefore, the Universal Forwarder never sees it in the log file and it doesn't get sent to the cloud. When I use the method above, I end up disabling logging to disk. I haven't verified whether logging to the SOC appliance stops as well. Any thoughts on how to tackle this?
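Not a definitive answer, but two things stand out. First, the filter definitions as written have syntax problems: value() takes a quoted name, there is a stray semicolon inside f_device_msgdrop's parentheses, and each filter expression needs its own terminating semicolon; a config that fails to parse would stop syslog-ng from applying any of the new log paths. Second, flags(final) on the first of the two log paths stops matching messages from ever reaching the second path, which by itself would disable logging to disk. A sketch of the same logic in (assumed) syslog-ng 3.x syntax:

filter f_device { host("192.168.1.1") or host("fqdn.device.com"); };
filter f_device_msgdrop { not match("aclID=0" value("MESSAGE")); };

log { source(s_udp514); filter(f_device); destination(d_socappliance); };
log { source(s_udp514); filter(f_device); filter(f_device_msgdrop); destination(d_disk); flags(final); };

Running syslog-ng -s (syntax check only) after each edit should confirm whether parsing was the problem.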
We have developed a Splunk client app using the Splunk Java SDK. A particular Splunk server installation has a few indexes with data stored in them. When getIndexes() is called on a com.splunk.Service object, it returns an empty IndexCollection. However, on the same Service object, we can search data in the indexes, and it is certain that those indexes do exist.

IndexCollection indexCollection = service.getIndexes();
boolean indexNotFound = indexCollection.isEmpty();

I got the same results with Java SDK 1.5.0.0 and 1.6.5.0. This is happening on one particular Splunk server installation; with other Splunk server installations, we do not have any issue. Under what condition could this issue happen? What can I do to troubleshoot?
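One pattern that can produce exactly this symptom: getIndexes() is backed by the /services/data/indexes endpoint, and what that endpoint returns depends on the role and the app/owner namespace the Service is bound to, while searching goes through a different authorization path. So an account whose role cannot list indexes, or a Service created with a restrictive namespace, may see an empty collection even though searches work. A quick way to check what the REST layer returns for the same account, from the Splunk search bar:

| rest /services/data/indexes splunk_server=local
| table title disabled

If this also comes back empty for that user, the difference between the installations is likely role or namespace configuration rather than the SDK.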
I am building a search based on a table of products with different versions. I need to run an initial search that returns the version with the most hosts ("Mainstream") and use that version to compare everything else against, in order to determine whether it is less than/greater than (older/newer). I am currently using a foreach command to send each category of product to a subsearch, which then grabs the mainstream version and returns it so I can compare each event's version to it. I am having extreme difficulty passing the field to the subsearch and filtering on the category with something like a where command, without setting off confusing errors that don't really make any sense ("Eval command malformed"). The logic of the query works when I am not using a '<<FIELD>>' token, but as soon as I try to pass the token to a where command within the subsearch, it falls apart. I am a Splunk newbie, so maybe I am missing something obvious; please advise:

| inputlookup Lookup_Table.csv
| eval Category = OSType. "-" .ProductName
| stats values(ProductVersion) AS Version values(LifeCycleStatus) AS Status by Category
| foreach Category
    [eval newLifecycleStatus=case(Version<
        [| inputlookup Lookup_Table.csv
         | eval Category = OSType. "-" .ProductName
         | where Category =='<<FIELD>>'
         | sort -product_count
         | head 1
         | eval Version="\"".ProductVersion."\""
         | return $Version], "Declining")]

I changed this code to something like the following, with no luck, because I can't filter the results without a where statement:

| inputlookup Lookup_Table.csv
| stats values(ProductVersion) AS Version values(LifeCycleStatus) AS Status by ProductCode
| foreach ProductCode
    [eval newLifecycleStatus=case(Version==
        [| inputlookup Lookup_Table.csv
         | eval NewProductCode=tostring('<<FIELD>>')
         | sort -product_count
         | head 1
         | eval ProductVersion="\"".ProductVersion."\""
         | return $ProductVersion], "Emerging")]
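The root issue is evaluation order: subsearches run before the outer pipeline, so the foreach <<FIELD>> token is never substituted inside them, and eval then chokes on the leftover token (hence "Eval command malformed"). A hedged sketch that stays in one pipeline instead, using eventstats to find each category's mainstream version (this assumes product_count is a field in the lookup, and note that comparing version strings is lexicographic, so for example "10" sorts before "9"):

| inputlookup Lookup_Table.csv
| eval Category = OSType."-".ProductName
| eventstats max(product_count) AS max_count by Category
| eventstats max(eval(if(product_count==max_count, ProductVersion, null()))) AS MainstreamVersion by Category
| eval newLifecycleStatus=case(ProductVersion < MainstreamVersion, "Declining",
    ProductVersion > MainstreamVersion, "Emerging",
    true(), "Mainstream")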
Hi,

We have these log entries:

2021-09-14 13:20:08.325 DEBUG [,88538eaa548c8b64,88538eaa548c8b64,true] 1 --- [tp1989219205-24] m.m.a.RequestResponseBodyMethodProcessor : Writing ["ping"]
2021-09-14 13:20:08.325 DEBUG [,88538eaa548c8b64,88538eaa548c8b64,true] 1 --- [tp1989219205-24] m.m.a.RequestResponseBodyMethodProcessor : Using 'text/plain', given [*/*] and supported [text/plain, */*, text/plain, */*, application/json, application/*+json, application/json, application/*+json]

I want to mask the MethodProcessor string so it is not visible in the logs. Can someone provide the regex I can use?
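At search time, rex in sed mode can rewrite _raw; to mask before indexing, the same sed expression can go into a SEDCMD in props.conf for that sourcetype. A sketch of the search-time version (the [MASKED] replacement text is arbitrary):

... | rex field=_raw mode=sed "s/\S+MethodProcessor/[MASKED]/g"

The \S+ swallows the dotted class prefix (m.m.a.RequestResponseBody...) together with the MethodProcessor suffix; tighten the pattern if other tokens in your logs could end the same way.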
Hello, I currently have a search over index_A that runs a subsearch on index_B, looking to match a field (field_B) from index_B against any log within index_A. The search works great, but the one frustration is not knowing what value field_B held, as all of the tabled results come from index_A. Is there a way I can join that matched field_B value to the results at the end of the search? Here is my current search, and thanks to anyone who has the time to help me with this!

index=index_A [search index=index_B | fields field_B | rename field_B as query]
| table field_A field_A1 field_A2 field_A3
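One way to recover it without a lookup: search both indexes at once, push the full list of field_B values onto every event with eventstats, and keep just the value(s) each raw event actually contains. A sketch assuming Splunk 8.0+ for mvmap:

(index=index_A) OR (index=index_B)
| eval b_val=if(index=="index_B", field_B, null())
| eventstats values(b_val) AS all_b
| where index=="index_A"
| eval matched_field_B=mvmap(all_b, if(like(_raw, "%".all_b."%"), all_b, null()))
| where isnotnull(matched_field_B)
| table field_A field_A1 field_A2 field_A3 matched_field_B

The like() test mirrors what the subsearch's query rename does (a substring match against the raw event); if field_B actually maps to an extracted field in index_A, a plain join on that field would be simpler.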