When will the Microsoft Fabric Add-on for Splunk be available for Splunk Cloud?
Hi, I have a list of servers coming from two different sources. List A has server names without domain names, and list B has servers both with and without domain names. I was trying to compare the two lists and get the matching and non-matching values. The problem is that both lists contain the same servers, but because of the domain names the comparison says they do not match. I understand the if function is probably not the correct choice, and when I use case with like it gives me an error. Any suggestions on this?

| makeresults
| eval listA="xyz1apac", listB="xyz1apac.ent.bhpbilliton.net"
| append [| makeresults | eval listA="xyz2", listB="xyz2.ent.bhpbilliton.net"]
| append [| makeresults | eval listA="xyz3emea", listB="xyz3emea"]
| append [| makeresults | eval listA="xyz4abc", listB="xyz4abc.ent.bhpbilliton.net"]
| fields - _time
| eval matching=if(listA != listB, "NOT OK", "OK")

Thanks
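One possible approach (a sketch, not a verified answer): normalize listB by stripping everything after the first dot before comparing. This assumes the short name is always the first dot-separated segment of the FQDN:

```
| makeresults
| eval listA="xyz1apac", listB="xyz1apac.ent.bhpbilliton.net"
| append [| makeresults | eval listA="xyz2", listB="xyz2.ent.bhpbilliton.net"]
| append [| makeresults | eval listA="xyz3emea", listB="xyz3emea"]
| fields - _time
| eval shortB=mvindex(split(listB, "."), 0)
| eval matching=if(listA=shortB, "OK", "NOT OK")
```

If the two sources can also differ in letter case, wrap both sides of the comparison in lower().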
Hi all, I have installed two add-ons: Splunk DB Connect and the Splunk Add-on for Oracle Database. According to the Splunk docs, Splunk DB Connect should have input templates imported from the Add-on for Oracle Database. However, it only has the default DB Connect templates. Is there any other way, such as manual configuration, to import them? Thanks.
Hello, I have 2 columns, one with a date and the other with the day of the week. Based on the day of the week, whenever it is Saturday or Sunday I want to change the time to 9 am. How can I do this?

submitted          dayweek    result
13/03/2025 14:24   Thursday   13/03/2025 14:24
12/03/2025 09:31   Wednesday  12/03/2025 09:31
11/03/2025 13:45   Tuesday    11/03/2025 13:45
10/03/2025 18:11   Monday     10/03/2025 18:11
09/03/2025 11:21   Sunday     09/03/2025 09:00
08/03/2025 21:55   Saturday   08/03/2025 09:00
07/03/2025 10:24   Friday     07/03/2025 10:24
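One way to express this in SPL (a sketch; it assumes submitted is a string field in %d/%m/%Y %H:%M format):

```
| eval t=strptime(submitted, "%d/%m/%Y %H:%M")
| eval dayweek=strftime(t, "%A")
| eval result=if(dayweek="Saturday" OR dayweek="Sunday", strftime(t, "%d/%m/%Y")." 09:00", submitted)
```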
So I have in the past used a report which finds a string and then calculates the size left, and it came as one whole event, so it was simple. Now it is coming as two events. How do I perform this calculation on the two events?

1st event - replies with totalCapacity=12323455667
2nd event - replies with usedCapacity=233445

I need to subtract the used from the total and report it. This was possible before because it came as just one event: I did an eval CapLeft = totalCapacity - usedCapacity and it worked, because totalCapacity and usedCapacity were in the same event.
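Since the two values now arrive in separate events, one common pattern (a sketch, assuming both fields are already extracted and you want the latest value of each) is to collapse them with stats before the eval:

```
index=your_index ("totalCapacity=*" OR "usedCapacity=*")
| stats latest(totalCapacity) as totalCapacity, latest(usedCapacity) as usedCapacity
| eval CapLeft = totalCapacity - usedCapacity
```

`your_index` is a placeholder; if several devices report capacity, add a `by host` (or similar) clause to the stats so each device is computed separately.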
Hello, I would like to know if it is possible to define the retention period for each bucket stage (hot/warm/cold). For example, setting the total frozenTimePeriodInSecs to 3 years while specifying a 1-year retention period for each stage (hot, warm, and cold). Could you please clarify this?
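For reference, this is roughly how the relevant indexes.conf settings line up (a sketch with assumed values): frozenTimePeriodInSecs controls the total age before data is frozen, while the hot-to-warm and warm-to-cold transitions are driven by bucket size/span and bucket count rather than by a per-stage retention time:

```
[my_index]
# total retention before freezing (~3 years)
frozenTimePeriodInSecs = 94608000
# hot -> warm: rolls on bucket size/span, not on a retention period
maxHotSpanSecs = 86400
# warm -> cold: rolls on bucket count, not on a retention period
maxWarmDBCount = 300
```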
We have a discrepancy of 30 to 40 seconds between the event timestamp and _time. I have tried changing the config in props.conf without any luck. Our setup is such that the search head is in the cloud while all the forwarders are on premise. The events are collected using psutil on Linux servers and sent to the IF through HEC. The props.conf is as follows:

[infra:script:uptime]
SHOULD_LINEMERGE = false
KV_MODE = json
INDEXED_EXTRACTIONS = JSON
TIMESTAMP_FIELDS = timestamp
TIME_PREFIX = "timestamp":\s
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 100
DATETIME_CONFIG = NONE
TRUNCATE = 0
TZ = Africa/Gaborone

btool produces the following output:

[splunkusr@uatbwsif001v bin]$ ./splunk cmd btool props list "infra:script:uptime" --debug
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf [infra:script:uptime]
/opt/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
/opt/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
/opt/splunk/etc/system/default/props.conf AUTO_KV_JSON = true
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf CHARSET = UTF-8
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf DATETIME_CONFIG = NONE
/opt/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
/opt/splunk/etc/system/default/props.conf DETERMINE_TIMESTAMP_DATE_WITH_SYSTEM_TIME = false
/opt/splunk/etc/system/default/props.conf HEADER_MODE =
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf INDEXED_EXTRACTIONS = JSON
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf KV_MODE = json
/opt/splunk/etc/system/default/props.conf LB_CHUNK_BREAKER_TRUNCATE = 2000000
/opt/splunk/etc/system/default/props.conf LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/default/props.conf MAX_DAYS_HENCE = 2
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf MAX_EVENTS = 256
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf MAX_TIMESTAMP_LOOKAHEAD = 100
/opt/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/system/default/props.conf SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf SHOULD_LINEMERGE = false
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TIMESTAMP_FIELDS = timestamp
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TIME_FORMAT = %s.%6N
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TIME_PREFIX = "timestamp":\s
/opt/splunk/etc/system/default/props.conf TRANSFORMS =
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TRUNCATE = 0
/opt/splunk/etc/apps/stanbic_uat_if_parsing_config/local/props.conf TZ = Africa/Gaborone
/opt/splunk/etc/system/default/props.conf detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf maxDist = 100
/opt/splunk/etc/system/default/props.conf priority =
/opt/splunk/etc/system/default/props.conf sourcetype =
/opt/splunk/etc/system/default/props.conf termFrequencyWeightedDist = false
/opt/splunk/etc/system/default/props.conf unarchive_cmd_start_mode = shell

Below is a sample raw event on Splunk Cloud:

{"hostname": "uatbwmca02v.bw.sbicdirectory.com", "timestamp": 1741857668.0344827, "uptime_days": 183, "uptime_hours": 20, "uptime_minutes": 2, "uptime_total_seconds": 15883370}

I have attached a screenshot of the following search:

index=uat_uptime
| eval correct_time=strptime(timestamp, "%s.%6N")
| convert ctime(correct_time) ctime(timestamp)
| table _time, correct_time, timestamp
| sort -_time

From the results, it is clear that there is a difference of 30-40 seconds between _time and the timestamp field on the event. Another anomaly is that _time is behind the timestamp. I need help forcing _time to be set to the value of the timestamp on the event.
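One detail worth checking (an observation, not a confirmed fix): DATETIME_CONFIG = NONE disables timestamp extraction altogether, so _time falls back to the time the event was received, which would produce exactly this kind of 30-40 second lag behind the timestamp field. A sketch of the stanza with that line removed:

```
[infra:script:uptime]
SHOULD_LINEMERGE = false
KV_MODE = json
INDEXED_EXTRACTIONS = JSON
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 100
TRUNCATE = 0
TZ = Africa/Gaborone
```

Note that with INDEXED_EXTRACTIONS = JSON, parsing (including timestamping) happens on the instance that first handles the data, so the stanza needs to be deployed on that tier and the instance restarted for the change to take effect.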
We get these messages. For example, DB Connect doesn't work anymore... How could I solve this?

03-11-2025 12:09:07.792 +0100 WARN  MongoClient [1244 KVStoreUpgradeStartupThread] - Disabling TLS hostname validation for localhost
03-11-2025 12:09:07.843 +0100 INFO  KVStoreConfigurationProvider [1244 KVStoreUpgradeStartupThread] - KVSTore peer=127.0.0.1:8191 replication state=KV store captain. Health state=1
03-11-2025 12:09:07.843 +0100 INFO  MongoUpgradePreChecks [1244 KVStoreUpgradeStartupThread] - Supported Upgrade 3
03-11-2025 12:09:11.773 +0100 ERROR PersistentScript [2200 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.9.exe" "C:\Program Files\Splunk\Python-3.9\Lib\site-packages\splunk\persistconn\appserver.py"}:   File "C:\Program Files\Splunk\Python-3.9\lib\logging\handlers.py", line 115, in rotate
03-11-2025 12:09:11.773 +0100 ERROR PersistentScript [2200 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.9.exe" "C:\Program Files\Splunk\Python-3.9\Lib\site-packages\splunk\persistconn\appserver.py"}:     os.rename(source, dest)
Hello everyone, I have set up my Splunk server (with receiving port 9997 enabled) and a Splunk forwarder to monitor my UF logs. Please suggest what I am missing here. I am getting the output below when I run ./splunk list forward-server:

Active forwards:
    None
Configured but inactive forwards:
    52.66.100.58:9997

I have done the following steps on my UF:

./splunk add forward-server 52.66.100.58:9997

outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
disabled = false
server = 52.66.100.58:9997

[tcpout-server://52.66.100.58:9997]

Thanks in advance.
Below is my search:

| inputlookup uf_ssl_kv_lookup
| search hostname=AB100*TILL* hostname!=AB100*TILL100 hostname!=AB100*TILL101 hostname!=AB100*TILL102 hostname!=AB100*TILL150 hostname!=AB100*TILL151

When I run the above search I see the warning below. How can I avoid the warning?

The term 'hostname!=AB100*TILL100' contains a wildcard in the middle of a word or string. This might cause inconsistent results if the characters that the wildcard represents include punctuation

There are hundreds of stores and thousands of tills. How should I modify my search? Note: I can't change the lookup table.

Example: hostname=AB1001234TILL1, where
AB -- stands for type
100 -- country code
1234 -- store number
TILL1 -- till number
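One way to sidestep mid-string wildcards (a sketch, assuming the hostname format is always type letters + digits + "TILL" + till number) is to filter with regular expressions instead of search wildcards:

```
| inputlookup uf_ssl_kv_lookup
| regex hostname="^AB100\d+TILL\d+$"
| where NOT match(hostname, "TILL(100|101|102|150|151)$")
```

The anchored `$` in the match() pattern keeps TILL100 from also excluding TILL1000, should till numbers ever grow that long.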
Hello @Splunkers, can someone please help me with this? I am trying to use the "lookup/inputlookup" command in a search. Use case: trying to extract some specific values from logs for given session IDs, but there are more than 200K session IDs to check. So I created a lookup table which includes the 200K sessions and then used the query below.

Problem: nothing is returned, but there should be values returned; I checked some session IDs manually.

index=testing_car hostname=*prod* "/api/update"
| rex field=_raw "CUSTOMER\":(?<FName>[^\,]+)"
| rex field=_raw "Session\":\"(?<SID>[^\"]+)"
| search [ | lookup Sessions.csv SID | fields SID]
| table SID, FName

P.S. The SID field is available in the Sessions.csv file.
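A likely culprit (a sketch, not a confirmed diagnosis): a subsearch that feeds values from a lookup file normally starts with inputlookup rather than lookup:

```
index=testing_car hostname=*prod* "/api/update"
| rex field=_raw "CUSTOMER\":(?<FName>[^\,]+)"
| rex field=_raw "Session\":\"(?<SID>[^\"]+)"
| search [ | inputlookup Sessions.csv | fields SID ]
| table SID, FName
```

Be aware that subsearch results are capped (roughly 10,000 by default), which matters with 200K session IDs. An alternative that avoids the cap is to filter after extraction, e.g. `| lookup Sessions.csv SID OUTPUT SID as matched | where isnotnull(matched)`.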
I'm planning to upgrade my Splunk environment now: a 3-member search head cluster, a 3-member indexer cluster, 2 heavy forwarders, and 1 master. I want to upgrade the HFs without data loss, but I have to stop the Splunk server during the upgrade. Is there any other way to upgrade a HF without data loss?
Hi, I am having trouble getting replace to work correctly in Ingest Processor, and have this example. In SPL I can run this search:

| makeresults
| eval test = "AAABBBCCC"
| eval text = "\\\"test\\\":\\\""
| eval output = replace(test, "BBB", text)

and I get the output I expect (screenshot). But if I run this in an Ingest Processor pipeline:

| eval test = "AAABBBCCC"
| eval text = "\\\"test\\\":\\\""
| eval output = replace(test, "BBB", text)

the result (screenshot) shows that the slashes before the double quotes have gone. Why have they gone, and how do I ensure they are retained by Ingest Processor? This is a simplified example of what I am trying to do, but it is the core of the problem I am having. Thanks
All, is there an API to export JMX config? I see APIs for exporting dashboards, transaction detection rules, alerts, etc., but nothing for JMX. This is where I'm looking: https://docs.appdynamics.com/appd/24.x/latest/en/extend-splunk-appdynamics/splunk-appdynamics-apis/configuration-import-and-export-api

Thanks
Hello, I'm trying to join based on a common field using a query similar to the one below. However, in the result I only get partial results from the right side, probably because of the search volume (I guess), or maybe my query is not right. Can we do this without join, or use join properly? TIA

index=provisioning_index sourcetype=PCF:log source_type=APP/PROC/WEB message_type=OUT cf_org_name=org1 cf_app_name=APP1 LOG_LEVEL="ERROR" service=service1 errorCd="DOC-MGMT*"
| fields _time errorCd errorDetails stateCode letterId documentId
| rex field=_raw "errorDetails=(?<errorDetails>.*?)\s*:"
| join left=lerr right=rlkp type=left where lerr.documentId = rlkp.documentId max=0
    [search index=provisioning_index sourcetype=PCF:log source_type=APP/PROC/WEB message_type=OUT cf_org_name=org1 cf_app_name=APP1 NOT letterId=null operation=generateInstantDocument
    | fields _time errorCd errorDetails stateCode letterId documentId]
| table _time lerr.errorCd lerr.errorDetails rlkp.stateCode rlkp.letterId lerr.documentId
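An alternative without join (a sketch, assuming documentId is present in both event sets) is to retrieve both sets in a single search and merge them with stats, which avoids join's subsearch limits, a common cause of partial right-side results:

```
index=provisioning_index sourcetype=PCF:log source_type=APP/PROC/WEB message_type=OUT cf_org_name=org1 cf_app_name=APP1
    ((LOG_LEVEL="ERROR" service=service1 errorCd="DOC-MGMT*") OR (NOT letterId=null operation=generateInstantDocument))
| rex field=_raw "errorDetails=(?<errorDetails>.*?)\s*:"
| stats latest(_time) as _time, values(errorCd) as errorCd, values(errorDetails) as errorDetails,
        values(stateCode) as stateCode, values(letterId) as letterId by documentId
```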
I am trying to change the color of a result based on its deviation from zero. The numbers can be both positive and negative. The ranges I am trying to implement are as follows:

(-10) <- 0 -> 10 should be #00ff00
(-15) <- (-10) and 10 -> 15 should be #ff8c00
<- (-15) and 15 -> should be #ff0000

Basically, any result from 0 to plus/minus 10 should be green, anything between plus/minus 10 and plus/minus 15 should be orange, and anything past plus/minus 15 should be red. Is this possible?
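The banding itself can be expressed in SPL with abs() and case() (a sketch; `value` is a placeholder for your actual result field):

```
| eval color = case(abs(value) <= 10, "#00ff00", abs(value) <= 15, "#ff8c00", true(), "#ff0000")
```

Mapping that color onto the visualization then depends on the dashboard type, e.g. colorPalette expressions in Simple XML table formatting.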
I am not using an API key, just the free tier. I get this error when the command is used in a search:

External search command 'ipextrainfo' returned error code 1. Script output = "error_message=TypeError at "/opt/splunk/etc/apps/ip_extrainfo/bin/ipextrainfo.py", line 47 : 'Message' object is not subscriptable".
Hello, I am creating a dashboard with the searches below, to customize the number of data points and the time span displayed (using timechart) when a different timerange is selected.

Search 1 - Chart Span: based on a custom timerange token, this returns the span period and the top values to be used in Search 3, given below.

| makeresults
| eval spantime=case($timerange|s$="| where calldate>=relative_time(now(),\"@mon\") AND calldate<relative_time(now(),\"@d\")","1d",$timerange|s$="| where calldate>=relative_time(now(),\"@d\") AND calldate<relative_time(now(),\"@m\")","1h",$timerange|s$="| where calldate>=relative_time(now(),\"-30d@d\") AND calldate<relative_time(now(),\"@d\")","1d",$timerange|s$="| where calldate>=relative_time(now(),\"-7d@d\") AND calldate<relative_time(now(),\"@d\")","1d",$timerange|s$="| where calldate>=relative_time(now(),\"-24h@h\") AND calldate<relative_time(now(),\"@h\")","1h",true(),"1d")
| eval startOfMonth=relative_time(now(),"@mon")
| eval noOfDays=round((now()-startOfMonth)/86400)
| eval startOfDay=relative_time(now(),"@d")
| eval noOfHours=round((now()-startOfDay)/3600-1)
| eval topvalues=case($timerange|s$="| where calldate>=relative_time(now(),\"@mon\") AND calldate<relative_time(now(),\"@d\")",$noOfDays$,$timerange|s$="| where calldate>=relative_time(now(),\"@d\") AND calldate<relative_time(now(),\"@m\")","$noOfHours$",$timerange|s$="| where calldate>=relative_time(now(),\"-30d@d\") AND calldate<relative_time(now(),\"@d\")","30",$timerange|s$="| where calldate>=relative_time(now(),\"-7d@d\") AND calldate<relative_time(now(),\"@d\")","7",$timerange|s$="| where calldate>=relative_time(now(),\"-24h@h\") AND calldate<relative_time(now(),\"@h\")","24",true(),"1d")

Search 2 - Saved Search: this is a report returning the fields below:

| table Date Duration "Handled by" Queue _time

Search 3 - Chart: using Search 2 as a base search and the search result token values from Search 1:

| timechart span=$Chart Span:result.spantime$ count as HourlyCalls
| sort $Chart Span:result.topvalues$ -_time

Now, when I load the dashboard, the default timerange is calldate>=relative_time(now(),\"-30d@d\") AND calldate<relative_time(now(),\"@d\"); based on this I would expect Search 3 to run as:

| timechart span=1d count as HourlyCalls
| sort 30 -_time

but it shows the error message below:

Set token value to render visualization $noOfDays$ $noOfHours$ $spantime$ $topvalues$

Can someone please suggest what is wrong here and how I can fix it?

Thank you.
When uploading an SVG image to Splunk Dashboard Studio, the characters for German umlauts are not displayed correctly. SVG file in the browser: (screenshot). SVG within the dashboard: (screenshot).

Can we include the UTF-8 encoding within the source code?

<svg version="2.0" encoding="utf-8" width="300" height="200" xmlns="http://www.w3.org/2000/svg">
<rect width="100%" height="100%" fill="red" />
<circle cx="150" cy="100" r="80" fill="green" />
<text x="150" y="125" font-size="60" text-anchor="middle" fill="white">
vowels a, o and u to make ä, ö, and ü. schön (beautiful) and Vögel (birds, plural form)
</text>
</svg>

{
  "type": "splunk.choropleth.svg",
  "options": {
    "svg": "splunk-enterprise-kvstore://67d185960ede0c052b05390c"
  },
  "context": {},
  "containerOptions": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
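One thing that may help (an assumption, not verified against Dashboard Studio): in XML the encoding belongs in the XML declaration, not as an attribute on the svg element, so the file would start like this:

```
<?xml version="1.0" encoding="UTF-8"?>
<svg version="1.1" width="300" height="200" xmlns="http://www.w3.org/2000/svg">
  <text x="150" y="125" font-size="20" text-anchor="middle" fill="white">schön, Vögel: ä, ö, ü</text>
</svg>
```

Alternatively, numeric character references such as &#228; for ä survive any byte encoding the uploader applies.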
Hello Team, could you please assist me with resolving the issue of not seeing logs in the SH after applying a new license? Additionally, since the Splunk license expired 5 months ago, could you kindly advise on the steps to fix this?

Additional information: before, I often used 120 GB/day, and now I use 20 GB/day.