All Topics

I want to get the date when the Splunk admin credential was last changed. Is there any way to find it?
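This is the kind of check I have in mind (just a sketch; I'm assuming edits to local users, including password changes, show up in the _audit index with action=edit_user, but I'm not certain of the exact action name):

index=_audit action=edit_user "admin"
| table _time user action info
| sort - _time

Even if that action name isn't right, the general idea is to look in _audit for user-edit events against the admin account.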
How to explode one row into several by breaking out a multi-value field. Sample data:

app=ABC client=AA views=View1,View2
app=ABC client=AA views=View1,View2,View3
app=ABC client=BB views=View1,View3
app=ABC client=CC views=View3,View2,View1

I want to table that as column data:

app client view
ABC AA View1
ABC AA View2
ABC AA View1
ABC AA View2
ABC AA View3
ABC BB View1
ABC BB View3
ABC CC View3
ABC CC View2
ABC CC View1

So that I can run a count on the resulting rows:

app client view count
ABC AA View1 2
ABC AA View2 2
ABC AA View3 1
ABC BB View1 1
ABC BB View3 1
ABC CC View3 1
ABC CC View2 1
ABC CC View1 1
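Roughly the pipeline I imagine needing (a sketch only; index and sourcetype are placeholders, and I'm assuming views arrives as a comma-separated string field):

index=myindex sourcetype=mysourcetype
| makemv delim="," views ``` turn the comma-separated string into a multivalue field ```
| mvexpand views ``` one output row per value ```
| rename views as view
| stats count by app client view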
We migrated our Splunk indexer from Ubuntu to RHEL recently. Everything appeared to go fine except for this one add-on. Initially we were getting a different error; I ran fapolicyd-cli to add the splunk file and that error cleared, but now we get this one:

External search command "ldapgroup" returned error code 1. Script output = "error message=HTTPError at "/opt/splunk/etc/apps/SA-ldapsearch/bin/packages/splunklib/binding.py", line 1245 : HTTP 403 Forbidden - insufficient permission to access this resources."

I went in and did chown -R on the folder (and every other folder in the path, including /opt/splunk), but that didn't fix it. The files and folders are all owned by splunk and have permission to run. I have verified that the firewall ports 636 and 389 are open. We have tried to reinstall the add-on through the web interface and get a series of similar errors indicating that it can't copy a number of .py files over; some do get copied, though, and most of the folders are created. I'm at a bit of a loss...
Hello,

I've seen many others in this forum trying to achieve something similar to what I'm trying to do, but I didn't find an answer that completely satisfied me. This is the use case: I want to compare the number of requests received by our web proxy with the same period in the last week, and then filter out any increase lower than X percent.

This is how I've tried to implement it using timewrap, and it's pretty close to what I want to achieve. The only problem is that the timewrap command only seems to work fine if I group by _time alone.

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time span=10m
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

This gives me a result like this:

_time | event_count_1week_before_week | event_count_latest_week
XXXX | YYYY | ZZZZ

If I try to do something similar but also group by the name of the web site being accessed in the tstats command, then the timewrap command doesn't work for me anymore: it outputs just the latest values of one of the web sites.

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| where (event_count_latest_week - event_count_1week_before) > 0
| where (((event_count_latest_week - event_count_1week_before)/event_count_latest_week)*100) >= 40

That doesn't work. Do you know why that happens and how I can achieve what I want?

Many thanks.

Kind regards.
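One workaround I've been sketching (not verified; the column naming is my assumption): pivot the sites into columns before timewrap so each site becomes its own series, then unpivot afterwards and compare per site. I believe timewrap names the wrapped columns <site>_latest_week and <site>_1week_before, but I haven't confirmed the exact naming, and this version compares the whole last-60-minutes window per site rather than each 10-minute bin:

| tstats `summariesonly` count as event_count from datamodel="Web" where sourcetype="f5:bigip:ltm:http:irule" by _time Web.site span=10m
| rename Web.site as site
| xyseries _time site event_count ``` one column per site, like timechart split by site ```
| timewrap 1w
| where _time >= relative_time(now(), "-60m")
| untable _time series event_count ``` back to one row per _time and series ```
| rex field=series "^(?<site>.+)_(?<week>latest_week|1week_before)$"
| stats sum(event_count) as event_count by site week
| xyseries site week event_count
| where (latest_week - '1week_before') > 0 AND (((latest_week - '1week_before')/latest_week)*100) >= 40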
Hi, does anyone have an idea how we can disable or hide the OpenAPI.json that is visible in the UI of a Splunk add-on and exposes the schema of the inputs? I already deleted the OpenAPI.json file from the appserver/static directory, which stops the file from being downloadable, but the button itself is still present and I want to hide it. It is referenced in some .js files that are difficult to read. Any idea how to hide the button shown below from the Configuration tab of the UI?
Hi All, We previously used the Splunk Add-on for Microsoft Office 365 Reporting Web Service (https://splunkbase.splunk.com/app/3720) to collect message trace logs in Splunk.   However, since this add-on has been archived, what is the recommended alternative for collecting message trace logs now?
We have an app created with Splunk Add-on Builder. We got an alert about the new Python SDK:

check_python_sdk_version
If your app relies on the Splunk SDK for Python, we require you to use an acceptably-recent version in order to avoid compatibility issues between your app and the Splunk Platform or the Python language runtime used to execute your app's code. Please update your Splunk SDK for Python version to at least 2.0.2. More information is available on this project's GitHub page: https://github.com/splunk/splunk-sdk-python

How do we upgrade the SDK in the Add-on Builder to use the latest version?
<input id="select_abc" type="multiselect" token="token_abc" searchWhenChanged="true">
  <label>ABC&#8205;</label>
  <default>*</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <choice value="*">All</choice>
  <search base="base_search">
    <query>
      | stats count as count by some_field
      | sort 0 - count
    </query>
  </search>
  <fieldForLabel>some_field</fieldForLabel>
  <fieldForValue>some_field</fieldForValue>
  <delimiter>,</delimiter>
  <change>
    <condition label="All">
      <set token="token_abc">("*") AND some_field != "SomeArbitraryStringValue"</set>
    </condition>
  </change>
</input>

I was wondering how I can exclude a specific option from the asterisk (*) value of the "All" option? Also, how does it work with the parentheses, and how do I also exclude it from the default value? Thank you
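For context, the panel search consumes the token roughly like this (the index and field names here are my placeholders):

index=myindex some_field IN $token_abc$

and with the "All" condition above I'd want the final search text to expand to something like:

index=myindex some_field IN ("*") AND some_field != "SomeArbitraryStringValue"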
Hey! I am currently standing up an enterprise Splunk system that has a multi-site (2) indexer cluster of 8 peers and 2 cluster managers in HA configuration (load balanced by F5). I've noticed that if we have an outage specific to a site, data rightfully continues to get ingested at the site that is still up. But upon return of service to the second site, we have a thousand or more fixup tasks (normal, I suppose), but at times they hang and eventually I get replication failures in my health check. Usually an unstable pending-down-up status is associated with the peers from the site that went down as they attempt to clean up.

This is still developmental, so I have the luxury of deleting things with no consequence. The only fix I have seen work is deleting all the data from the peers that went down and allowing them to resync and copy from a clean slate. I'm sure there is a better way to remedy this issue.

Can anyone explain, or point me in the direction of, the appropriate solution and the exact cause of this problem? I've read "Anomalous bucket issues" in the Splunk documentation, but roll, resync, delete doesn't quite do enough, and there is no mention of why the failures start to occur. From my understanding, fragmented buckets play a factor when reboots or unexpected outages happen, but how exactly do I regain some stability in my data replication?
Hi! Maybe this question is so simple to answer that I did not find any example, so please be kind to me.

We use append in our correlation search to see if we have a server in blackout. Unfortunately we have seen the append returning just partial results, which lets an incoming event create an episode and incident. It happens very seldom, but imagine you set a server into blackout for a week and you run the correlation search every minute: just one issue with the indexer layer, e.g. a timeout, creates a risk of the event passing through.

Our idea now is to have a saved search feed a lookup instead. That search could then run at a lower frequency, maybe every 5 minutes. But what if that search sees partial results and updates the lookup with partial data? So, long story short: how can one detect in a running search that it is dealing with partial results further down the pipe? Could this work, as an example for a peer timeout?

index=...
| eval sid="$name$"
| search NOT
    [| search index=_internal earliest=-5m latest=now() sourcetype=splunk_search_messages message_key="DISPATCHCOMM:PEER_ERROR_TIMEOUT" log_level=ERROR
     | fields sid]
| outputlookup ...

Any help is appreciated.
I have a timechart that shows traffic volume over time and the 85th percentile of API response times. I would like to add the URI stem to the timechart so that I can track performance over time for each of my API calls. I'm not sure how that can be done.

| timechart span=1h count(_raw) as "Traffic Volume", perc85(time_taken) as "85% Longest Time Taken"

Example of a table by URI stem:

| stats count avg(time_taken) as Average BY cs_uri_stem
| eval Average = round(Average, 2)
| table cs_uri_stem count Average
| sort -Average
| rename Average as "Average Response Time in ms"
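Roughly what I'm picturing is something like this (a sketch; the limit value is my assumption). With a by clause and two aggregations, I believe timechart produces one column per aggregation per URI stem, named something like "Traffic Volume: /api/foo", though I haven't verified the exact naming:

| timechart span=1h limit=15 useother=false count as "Traffic Volume" perc85(time_taken) as "85% Longest Time Taken" by cs_uri_stem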
Hello,

A client went overboard with how many saved searches they require and won't get rid of them. Because of this, the read/write load is hammering the indexers to the point that they can't cope, remove themselves from the cluster, and then re-add, which causes even more resource strain. Adding more indexers isn't an option.

The current setup is a 3-VM multisite search head cluster and a 4-VM multisite indexer cluster.

As they only require a replication factor of 3 and a search factor of 3, I am wondering if there is a way to use only 1 SH and 1 indexer to run all the saved searches, so that the load doesn't affect the other 3 indexers?
Hello all, I have a requirement to list all of our assets and show the last time they appeared in the logs of many different tools. I wanted to use the KV store for this: we would run a search against each tool's logs and then update that tool's "last seen" time in the KV store for the particular asset. I've attempted this a few ways, but I can't seem to get it going. I have the KV store built with one column of last_seen times for one tool, but I am lost on how to update the last_seen times for the other tools for existing entries in the KV store. Any guidance would be appreciated. Thank you!
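The pattern I've been trying to get working looks roughly like this (a sketch only; the lookup name asset_tracker, the asset field, and the per-tool *_last_seen field names are placeholders): read the existing collection, append fresh results for one tool, collapse back to one row per asset, and write the whole thing back so the other tools' columns are preserved.

| inputlookup asset_tracker
| append
    [ search index=tool2_logs earliest=-24h
      | stats max(_time) as tool2_last_seen by asset ]
| stats max(tool1_last_seen) as tool1_last_seen, max(tool2_last_seen) as tool2_last_seen by asset ``` one max() per tool column ```
| outputlookup asset_tracker

One scheduled search like this per tool, each writing its own <tool>_last_seen field, seems like it would keep one row per asset with the newest time per tool.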
Hi, we were using Splunk Enterprise (8.2.5) and ESS (7.2.0) on Debian 12. Everything was working fine until I upgraded Splunk to 9.2.2 (first to 9.1.0 and then to 9.2.2). The next morning, when I checked Security Posture and then Incident Review, I found that no notables had been created in ESS. I checked the scheduler and the correlation searches ran successfully. I also tried creating an ad-hoc notable, but although Splunk told me it was created successfully, nothing showed up in my Incident Review dashboard. Everything else (log ingestion, search, regex, ...) is working fine. I've been checking the logs for the past few hours but I still have not found anything regarding this issue. I also tried redeploying 7.2.0, but no luck. Any ideas?
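For what it's worth, this is the kind of check I've been running to see whether anything lands in the notable index at all (just a sketch, assuming the default notable index name):

index=notable earliest=-24h
| stats count by search_name
| sort - count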
Hi all, we have an index, say index1, with a log retention of 7 days, where we receive logs for different applications. Now we have a requirement to continuously copy all ERROR logs from one application to another index, say index2. The intention is for index2 to have a longer retention so we have access to the error logs for a longer period.

What is the best way to implement such a mechanism? It's okay to run this job every day, or every 6 hours or so. It would be best to retain all fields and field extractions in the target index as well.
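The approach I've been considering is a scheduled search with collect (a sketch; the application filter and field names are placeholders). My understanding is that collect writes the copied events with sourcetype=stash by default, and that overriding the sourcetype makes the copied data count against the license again, so that part needs checking:

index=index1 app=myapp log_level=ERROR earliest=-6h@h latest=@h
| collect index=index2

Scheduled every 6 hours over the previous 6-hour window, that should land the raw events in index2; whether the original search-time field extractions still apply under the stash sourcetype is something I'd want to verify.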
Hi Community! I have a (kind of) special problem with my data routing.

Topology: We have 2 different clusters, one for ES and one for Splunk Enterprise. Each cluster consists of a minimum of 1 search head and 4 indexer peers (multisite cluster), all hosted on RedHat virtual machines.

Use case: On all Linux systems (including Splunk itself) some sources are defined for ES and some sources for normal Splunk Enterprise indexes. E.g.:

/var/log/secure - ES (Index: linux_security)
/var/log/audit/audit.log - ES (Index: linux_security)
/var/log/dnf.log - Splunk Enterprise (Index: linux_server)
/var/log/bali/rebootreq.log - Splunk Enterprise (Index: linux_server)

Problem: The routing of those logs from the collecting tier (universal forwarder, heavy forwarder) is fine, because those components have both clusters defined as output groups, including props/transforms config. On the search heads only their own search peers are defined as an output group (ES search head --> ES indexer cluster, Splunk Enterprise search head --> Splunk Enterprise indexer cluster). Because of several summary searches and inputs on the search heads, and the frequent changes made by power users, I'm not able to adjust the routing like we do on the heavy forwarder. That works fine so far, except for the sources that need to be sent to the opposite cluster. The same applies to the logs directly on the indexer tier: the defined logs need to be sent to the other cluster.

So, simplified: the log /var/log/secure on a Splunk Enterprise cluster search head / indexer needs to be sent to the ES cluster indexers. The log /var/log/dnf.log on an ES cluster search head / indexer needs to be sent to the Splunk Enterprise indexers.

What I have done already: I configured both indexer clusters to send data to each other based on the specific index in outputs.conf. With this, the events are now available in the correct cluster, but they are also available as duplicates in their source cluster. I'm trying to get rid of the source events!

Splunk Enterprise indexer outputs.conf:

[indexAndForward]
index = true

[tcpout]
...
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
forwardedindex.3.blacklist = .*
forwardedindex.4.whitelist = linux_secure
forwardedindex.5.blacklist = _.*
forwardedindex.filter.disable = false
useACK = false
useClientSSLCompression = true
useSSL = true

[tcpout:es_cluster]
server = LINUXSPLIXPRD50.roseninspection.net:9993, LINUXSPLIXPRD51.roseninspection.net:9993, LINUXSPLIXPRD52.roseninspection.net:9993, LINUXSPLIXPRD53.roseninspection.net:9993

ES indexer outputs.conf:

[indexAndForward]
index = true

[tcpout]
...
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)
forwardedindex.3.blacklist = .*
forwardedindex.4.whitelist = linux_server
forwardedindex.5.blacklist = _.*
forwardedindex.filter.disable = false
useACK = false
useClientSSLCompression = true
useSSL = true

[tcpout:rosen_cluster]
server = LINUXSPLIXPRD01.roseninspection.net:9993, LINUXSPLIXPRD02.roseninspection.net:9993, LINUXSPLIXPRD03.roseninspection.net:9993, LINUXSPLIXPRD04.roseninspection.net:9993

Additionally, I tried to set up props.conf / transforms.conf like we do on the HF to catch at least the events from the search head and send them to the correct _TCP_ROUTING queue, but without any success. I guess because they already got parsed on the search head.

Splunk Enterprise props.conf:

[linux_secure]
...
SHOULD_LINEMERGE = False
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS-routingLinuxSecure = default_es_cluster

Splunk Enterprise transforms.conf:

[default_es_cluster]
...
DEST_KEY = _TCP_ROUTING
FORMAT = es_cluster
REGEX = .
SOURCE_KEY = _raw

ES props.conf:

[rhel_dnf_log]
...
SHOULD_LINEMERGE = True
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%Q
TRANSFORMS-routingLinuxDNF = default_rosen_cluster

ES transforms.conf:

[default_rosen_cluster]
...
DEST_KEY = _TCP_ROUTING
FORMAT = rosen_cluster
REGEX = .
SOURCE_KEY = _raw

Example for source /var/log/dnf.log (_time | _raw | host | source | index | splunk_server | count):

2024-09-10 12:07:21 | 2024-09-10T12:07:21+0000 DDEBUG timer: config: 3 ms | linuxsplixprd51.roseninspection.net (indexer, ES) | /var/log/dnf.log | last_chance, linux_server | linuxsplixprd01.roseninspection.net, linuxsplixprd51.roseninspection.net | 2
2024-09-11 12:24:31 | 2024-09-11T10:24:31+0000 DDEBUG timer: config: 4 ms | linuxsplixprd01.roseninspection.net (indexer, Splunk Enterprise) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1
2024-09-10 13:15:04 | 2024-09-10T11:15:04+0000 DDEBUG timer: config: 3 ms | linuxsplshprd50.roseninspection.net (search head, ES) | /var/log/dnf.log | last_chance, linux_server | linuxsplixprd01.roseninspection.net, linuxsplixprd50.roseninspection.net | 2
2024-09-10 13:22:53 | 2024-09-10T11:22:53+0000 DDEBUG Base command: makecache | linuxsplshprd01.roseninspection.net (search head, Splunk Enterprise) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1
2024-09-11 11:55:51 | 2024-09-11T09:55:51+0000 DEBUG cachedir: /var/cache/dnf | kuluxsplhfprd01.roseninspection.net (heavy forwarder) | /var/log/dnf.log | linux_server | linuxsplixprd01.roseninspection.net | 1

Any idea how I can get rid of those duplicate events in the source cluster (last_chance)?
Hi, is there any Splunk Technology Add-on (TA) for Dell Unity storage? Any suggestions, please? I only see Dell PowerMax and Dell PowerScale (formerly Dell Isilon) on Splunkbase.
Dears,

I'm getting an error after loading adrum.js:

Refused to frame https://cdn.appdynamics.com/ because it violates the following Content Security Policy directive: frame-src 'self' www.google.com www.google.com

The EUM is reachable. EUM-processor: version-'24.4.0.0', commit-cd:XXXXXXXXXXXXXXb, build-release/24.4.0.next #24.4.0-35342, timestamp=2024-05-02 01:18:33

The backend is Microsoft SharePoint. The CSP has both the CDN and EUM servers added.

Regards,
Khalid
I'm trying to import a CSV file generated by the NiFi GetSplunk component. It retrieves events from a Splunk instance SPL-01 and stores them in a CSV file with the following header:

_serial,_time,source,sourcetype,host,index,splunk_server,_raw

I use INDEXED_EXTRACTIONS=CSV when I import the CSV files on another Splunk instance, SPL-02. If I just import the file, the host will be the instance SPL-02, and I want the host to be SPL-01. I got past this by having a transform as follows:

[mysethost]
INGEST_EVAL = host=$field:host$

Question 1: That gives me the correct host name set to SPL-01, but I still have an EXTRACTED_HOST field when I look at events in Splunk. I found the article below, where I got the idea to use $field:host$, but it also uses ":=" for assignment; that did not work for me, so I used "=" and then it worked. I also tried setting "$field:host$=null()" but that had no effect.
https://community.splunk.com/t5/Getting-Data-In/How-to-get-the-host-value-from-INDEXED-EXTRACTIONS-json/m-p/577392

Question 2: I have a problem getting the data from the _time field in. I tried using TIMESTAMP_FIELDS in props.conf for this import:

TIMESTAMP_FIELDS=_time (did not work)
TIMESTAMP_FIELDS=$field:_time$ (did not work)

I then renamed the header line so the time column was named "xtime" instead, and then I could use props.conf and set TIMESTAMP_FIELDS=xtime. How can I use the _time field directly?
Hello Members,

I have problems between the peers and the managing node (CM). I tried to identify the issue, but I cannot find a possible way to fix it because I didn't notice any problems regarding the connectivity.

See the picture below.