All Topics


Hello,

We scheduled a search that alerts us if we have not received logs from any of our hosts for more than 5 minutes. It looks like this:

| metadata type=hosts index=* | eval age=now()-lastTime | where age>3600

However, there is an issue: it does not fire if we only partly stop receiving logs from a host (say, only 1 sourcetype out of 2 goes silent). Do you know a way to create the same alert by host AND by sourcetype at the same time? Thanks for the help.
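One hedged way to get per-host, per-sourcetype coverage (a sketch; the index scope and threshold are carried over from the question) is tstats, which can split the latest event time by both fields:

```
| tstats latest(_time) as lastTime where index=* by host, sourcetype
| eval age = now() - lastTime
| where age > 3600
```

Each surviving row is a host/sourcetype pair that has gone quiet, so a single silent sourcetype triggers even while the host's other sourcetypes keep logging.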
Hi, I have the following search string, where the Username field is extracted using the rex command. Now I want to use a lookup file which has a field "user" that matches the extracted "Username" field. How can I use lookup commands to match them and pull more fields of interest, like Office, into the search?

index="x" | rex field=_raw "(?:Users%5C)(?<Username>.*)(?:%5C(local|input))"
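A hedged sketch of the lookup step, assuming a lookup definition named user_info (a hypothetical name) with fields user and Office:

```
index="x"
| rex field=_raw "(?:Users%5C)(?<Username>.*)(?:%5C(local|input))"
| lookup user_info user AS Username OUTPUT Office
```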
Hi,

We have had a "From Address" specified in our email configuration for some time. Recently I realised that we were no longer receiving emails. I could see alerts being sent, and the actions section said the emails were sent, but none were being received. I checked with the administrators of our company's mail gateway in case the emails had been quarantined, but that was not happening.

I did a bit of searching and came across a community post which says "The from address field should be empty". Even though we did not have the same issue (having tried with a number of external email addresses), and this contradicts the documentation and our previous experience, I tried removing the "From Address" config. Once this was done, we started to receive emails.

Is there a new issue with configuring the "From Address"? We would like to go back to specifying the "Mail From" address if possible.

Regards, Kevin
Hi, Splunkers! I have a question about the menu bar (or label bar; I'm not sure what to call it). The screenshot is from the top right of the main page. Normally we have only one dashboard, named "Application", in the label, as in the photo. Can I also add another dashboard next to "Application"? That is, next to "Application" I want to put another dashboard. Is that possible? Thanks for any help.
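For context, an app's menu bar is typically driven by its navigation file (data/ui/nav/default.xml); a minimal sketch that would list a second dashboard next to the first (the view names here are hypothetical):

```xml
<nav>
  <view name="application" default="true" />
  <view name="second_dashboard" />
</nav>
```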
cf_app_id: *****************88
cf_app_name: ***********888
cf_ignored_app: false
cf_org_id: ***************88888888888888
cf_org_name: USA.MRCH.APP.UCOMM.CAT
cf_origin: firehose
cf_space_id: ***************88888888888888
cf_space_name:
deployment: ******************88888888888888
event_type: LogMessage
info_splunk_index: null
ip: 10.183.40.145
job: diego_cell
job_index: acb0c570-3322-4273-9704-22c54adb8894
message_type: OUT
msg: date=2020-02-25 06:28:05,346 severity=INFO service=ucom-payment-services partnerId=FP_WALLET_US walletId=FP_SERVER X-B3-TraceId=29157c3fe87e4f3dbfce5608e4ef7b55 X-B3-SpanId=c3e9b5b4f4266d84 logger=c.f.u.p.s.c.RequestPayloadMerger message=validateFundingSource value : true pid=23 thread=http-nio-8080-exec-3
origin: rep
source_instance: 1
source_type: APP/PROC/WEB
timestamp: 1582630085346462700

Now:
1. How could I extract, from the events for this cf_app_name, the msg field, and pull out the partnerId, trace ID, and (e.g.) the request and response data, which consist of body fields such as the method type?
2. Once the keys and values are extracted from the cf_app_name events, I need to export them in CSV format, which we use for validation in JMeter or with macros.

That's the plan. Can someone with adequate knowledge shed some light on this? Your support is appreciated.
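A hedged sketch of the extraction (the index name and app-name filter are assumptions): pull the interesting keys out of msg with rex, then write them out with outputcsv for use in JMeter:

```
index=cf_logs cf_app_name="your-app-name"
| rex field=msg "partnerId=(?<partnerId>\S+)"
| rex field=msg "X-B3-TraceId=(?<traceId>\S+)"
| table cf_app_name partnerId traceId
| outputcsv validation_data
```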
I really don't want to change our whole infrastructure and the many dashboards we have built over the years, so going to the App for Infrastructure is a last-resort option for us. Are there any plans to give us a working version of Splunk App for Unix and Linux 8.0.1?
I have events with JSON in them and I need to know what percentage of the time each field appears. The field set in the events is not consistent: sometimes an event has many fields, sometimes only a few, and the name of each field is unknown at the time of the search.

So far I have used rex to extract the JSON and spath to extract the fields from the JSON. I also used "fields", so now the events only have the fields I am interested in (other than the _time field; if I remove that, I get no results).

How can I generate a table that shows:

Field a appears: 40%
Field b appears: 80%
Field c appears: 10%
and so on...

The fields are dynamic in name and occurrence, so I don't know the names at the time of the search. Is there some way to accomplish this? Thanks,
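A hedged sketch using fieldsummary: since _time is present in every event, its count can stand in for the event total (the base-search names are hypothetical):

```
index=my_index sourcetype=my_json
| spath
| fieldsummary
| eventstats max(count) as total
| eval pct = round(100 * count / total, 1)
| table field pct
```

fieldsummary emits one row per field with its occurrence count, which is exactly the dynamic-field shape described above.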
Hi all, I am racking my brains on this one.

The business has built field names containing years and volumes in the field name (don't ask why). So a single user could have 3 fields like the below, each containing either true or false:

fieldname: TextString
multiyearfield19(3): false
multiyearfield18(9): true
multiyearfield20(87): false

What I need to be able to do is search and isolate where any of these multiyear fields = true. Is it possible? I am struggling: the basic multiyearfield*=true won't work, the rex I am trying won't work, and joining the fields into a single name won't work because some fields contain true and others false.

Any advice is gratefully received.
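One hedged approach is foreach, which iterates over a wildcard so the year/volume suffixes don't need to be known in advance (the single quotes around <<FIELD>> matter because the names contain parentheses):

```
... your base search ...
| eval any_true = "false"
| foreach multiyearfield* [ eval any_true = if('<<FIELD>>' == "true", "true", any_true) ]
| where any_true == "true"
```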
Hi,

I've installed the Alert Manager app and add-on on my Splunk Cloud instance, but I can't make it work. I've followed the instructions from the official documentation page (http://docs.alertmanager.info/en/latest/installation_manual/), but it still isn't working at all.

All my alerts have the Alert Manager trigger action enabled. All my alerts and apps, even the Incident Posture, are global. I created an index called alert_manager for the alerts and changed it in the macros and the Alert Manager settings.

I don't know what else to do. I've been reading and searching a lot, but I couldn't find anything helpful. Let me know if there is anything missing for you to help me. Thanks in advance.
Hi Luke,

We are planning to use the Network Toolkit for one of our clients. The requirement: they have 8,000+ devices for which we need to set up uptime monitoring. Will the Network Toolkit app support pinging this many devices in one go, or at least on a 5-minute interval? Also, can we schedule maintenance periods for those devices through ITSI?
Hi All, my data is like the below. I want to extract the short hostname (dropping the domain suffix) while keeping the IP:

853727-gcplusrspcndb01.usa.corp.ad 10.198.29.5

Output:
853727-gcplusrspcndb01 10.198.29.5
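A hedged rex sketch, assuming each event looks like the sample above (an FQDN followed by an IP): keep everything before the first dot, plus the IP:

```
| rex field=_raw "^(?<short_host>[^.\s]+)\.\S+\s+(?<ip>\S+)"
| eval output = short_host . " " . ip
```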
Hello, I'm running Splunk in Kubernetes and I want to upgrade Splunk from version 7.2.6 to 8.x. Is there anything special I need to take into account? Is there any difference between upgrading a VM server and upgrading in Kubernetes? Thanks
Hi,

When I run a dashboard search, I'm writing an event to index=test with the collect command. When this search is finished, it sets a token which is used to start a second search. The second search visualizes events from index=test and should show the latest collected event when it runs.

Basically it works, but sometimes the second search does not see the latest event from index=test. I assume the second search starts too soon after the collect command. Is it possible to create some kind of delay between these searches? Or is there another approach to solve this problem?

What I could think of is to append a search over internal data after the collect command and remove those results directly afterwards, just to make the first search run longer, so that the trigger for the second search is set a few seconds later.

Thanks in advance
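Rather than padding the first search, a hedged alternative is to let the second search re-run until indexing catches up, e.g. with a refresh interval in Simple XML (a sketch; the ids, token name, and interval are illustrative):

```xml
<search id="writer">
  <query>... | collect index=test</query>
  <done>
    <set token="show_result">true</set>
  </done>
</search>
<panel depends="$show_result$">
  <table>
    <search>
      <query>index=test | head 1</query>
      <refresh>10s</refresh>
    </search>
  </table>
</panel>
```

The collect output can lag behind the search's completion while it is indexed, so the panel's periodic refresh picks up the event once it lands.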
Hi,

We have a correlation search that produces a couple of thousand events every 5 minutes. At the same time we are seeing the "Skipped Events Percentage" in the "Event Analytics Monitoring" dashboard go to 100%. In addition, I see the KV stores for itsi_notable_group_user and itsi_notable_group_system hit 50,000 (which I subsequently raised to 150,000 in itsi_notable_event_retention.conf). For some reason, episodes are not reliably getting generated.

Two questions:
1. How do we troubleshoot the skipped-events-percentage issue, which presumably is causing the lack of episodes? (I can't seem to find documentation discussing how this all works.)
2. Should we change our correlation search to exclude normal-severity events? Currently, the normal-severity events are produced so that we can change episodes to "info" when a "normal" event comes in. Recommendations on a better practice than this are welcome!

Thanks in advance.
Good morning,

I would like to test Splunk Phantom Community Edition in my home lab. When I try to install it following the documentation, the following error appears:

About to proceed with Phantom install
Do you wish to proceed [y/N] y
sed: can't read /opt/phantom/bin/stop_phantom.sh: No such file or directory
Enter username: admin
Enter password: ************
Loaded plugins: product-id, search-disabled-repos, subscription-manager
Cleaning repos: alternatives-phantom phantom-apps phantom-base phantom-product
              : rhel-7-server-extras-rpms rhel-7-server-optional-rpms
              : rhel-7-server-rh-common-rpms rhel-7-server-rpms
              : rhel-7-server-supplementary-rpms rhel-server-rhscl-7-rpms
Updating phantom repo package
Error updating Phantom Repo package
https://***@repo.phantom.us/phantom/4.8/product/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 401 - Unauthorized
Trying other mirror.

One of the configured repositories failed (Phantom product package), and yum doesn't have enough cached data to continue. At this point the only safe thing yum can do is fail. There are a few ways to work "fix" this:

1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working upstream. This is most often useful if you are using a newer distribution release than is supported by the repository (and the packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled: yum --disablerepo=phantom-product ...
4. Disable the repository permanently, so yum won't use it by default. Yum will then just ignore the repository until you permanently enable it again or use --enablerepo for temporary usage: yum-config-manager --disable phantom-product or subscription-manager repos --disable=phantom-product
5. Configure the failing repository to be skipped, if it is unavailable. Note that yum will try to contact the repo when it runs most commands, so will have to try and fail each time (and thus yum will be much slower). If it is a very temporary problem though, this is often a nice compromise: yum-config-manager --save --setopt=phantom-product.skip_if_unavailable=true

failure: repodata/repomd.xml from phantom-product: [Errno 256] No more mirrors to try.
https://***@repo.phantom.us/phantom/4.8/product/x86_64/repodata/repomd.xml: [Errno 14] HTTPS Error 401 - Unauthorized

Is it not possible to install Splunk Phantom from RPM packages? Is it only available via OVA for Community Edition? Many thanks for your help.
I seem to be running into the 10K limit when I export via email, while manually downloading the data works just fine. Will specifying a max of 15K fix that, or does the reporting package still have these limitations? Thanks in advance.
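For reference, the emailed-results cap is commonly governed by alert_actions.conf rather than by the search itself; a hedged sketch (the value is illustrative, and your environment may enforce limits elsewhere):

```
# alert_actions.conf
[email]
maxresults = 15000
```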
Hi team, I'm trying to find the network traffic of a user and classify it as high or normal based on avg and stdev calculations.

QUERY:

index="pan_logs" sourcetype="pan:traffic" user!=unknown
| stats sum(bytes) as bytes by _time, user
| eval MB = round(bytes/1024/1024,4)
| bin span=1d _time
| stats avg(MB) as avg stdev(MB) as stdv by user, _time
| eval avg = round(avg,4), stdv = round(stdv,4)
| eval Volume_Type = if((avg + 2*stdv) > MB, "HIGH", "NORMAL")

However, the avg and stdev calculation here is wrong: it aggregates on a per-day basis rather than over the full window when I run it for the last 7 days.
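A hedged rework (assuming the goal is to compare each user's daily MB against that user's average over the whole search window): eventstats computes the window-wide stats per user while keeping the per-day MB column available, which a second stats would discard:

```
index="pan_logs" sourcetype="pan:traffic" user!=unknown earliest=-7d
| bin span=1d _time
| stats sum(bytes) as bytes by _time, user
| eval MB = round(bytes/1024/1024, 4)
| eventstats avg(MB) as avg, stdev(MB) as stdv by user
| eval Volume_Type = if(MB > avg + 2*stdv, "HIGH", "NORMAL")
```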
Hello,

We use a Python script to export some data from our database every 24 hours and save it in the $SPLUNK_HOME/etc/apps/SplunkEnterpriseSecuritySuite/lookups folder in .csv format. For some reason Splunk can't recognise ";" as a delimiter, so we get a lookup with a single field, like below:

name;ip;os;environment
wks123;192.168.0.1;windows 10;production
srv456;192.168.0.2;windows 2016;test
etc.

At the same time, when we create a new lookup file based on the same .csv file via the Lookup Editor add-on, it works perfectly fine. Could you please help us set up a delimiter for our original .csv file in the Splunk configuration? Thanks.
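Since Splunk CSV lookups expect comma-separated files, one hedged workaround is to have the export script write commas instead (csv.writer quotes any values that themselves contain commas). A minimal sketch of the conversion step, with the function name being an illustrative choice:

```python
import csv
import io

def convert_delimiter(src_text, src_delim=";"):
    """Re-write semicolon-delimited CSV text as standard comma-delimited CSV.

    Values containing commas are quoted automatically by csv.writer.
    """
    out = io.StringIO()
    reader = csv.reader(io.StringIO(src_text), delimiter=src_delim)
    writer = csv.writer(out, lineterminator="\n")
    for row in reader:
        writer.writerow(row)
    return out.getvalue()

# Example with the rows from the question:
sample = "name;ip;os;environment\nwks123;192.168.0.1;windows 10;production\n"
print(convert_delimiter(sample))
# name,ip,os,environment
# wks123,192.168.0.1,windows 10,production
```

The same logic can be dropped into the existing export script so the .csv lands in the lookups folder already comma-delimited.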
We need to log all data traffic from SOAP interfaces with large requests/responses, which sometimes contain embedded Base64-encoded documents. The log events are up to 20 MB.

Is that possible without performance impact, or should we filter the messages before they are forwarded to the indexer and send them to another storage (e.g. S3)? We need all these events in Splunk, but it would be sufficient to have references to the encoded documents.

Does anyone have experience with forwarding/logging huge events? Our Splunk volume licence wouldn't be a problem; we have a contract of about 100 GB daily for Splunk AWS.

Regards, Falk Berger
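For context, Splunk truncates lines at 10,000 bytes by default, so events this large would at minimum need TRUNCATE raised in props.conf for the sourcetype; a hedged sketch (the sourcetype name and values are illustrative, not a sizing recommendation):

```
# props.conf on the indexers / heavy forwarders
[soap:traffic]
TRUNCATE = 25000000
MAX_EVENTS = 100000
```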
In Splunk >> Apps, I need to build a query with an ORDER BY clause, which is working fine, but my requirement is to build a query with ORDER BY DESC.
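Assuming "ORDER BY" maps to SPL's sort command, descending order is just a minus sign before the field (the index and field names below are hypothetical):

```
index=my_index
| stats count by user
| sort - count
```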