All Topics

Hi, we recently upgraded our Splunk environment to 9.2.4. Some of our apps still use Python 2.7, and we are in the process of upgrading them over the next 3 months. I noticed integrity-check errors such as "splunk/bin/jp.py present_but_shouldnt_be" and "/splunk/bin/python2.7 present_but_shouldnt_be". Since we still rely on Python 2.7, we do not want to delete these files from the bin directory. Is there any way to suppress these messages during the integrity check?
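If relaxing the check itself is acceptable, one option worth exploring is the installed-file integrity setting in server.conf. This is only a sketch from memory; verify the stanza and value names against the server.conf spec for your version before using it.

# server.conf on the instance reporting the errors (restart required)
# 'log_only' keeps running the check but only logs findings; 'disabled' turns it off
[installed_file_integrity]
state = log_only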
Hello team, I've written a script that uses the sudo command. I've deployed it to my forwarders and I get the following error: message from "/opt/splunkforwarder/etc/apps/app/bin/script.sh" sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges? Please help me fix this issue.
Hi! I recently wanted to test sending traces using the signalfx splunk-otel-collector. In general everything works as expected; however, when I send spans containing links to other spans, these links don't show up in the waterfall UI, even though they should work according to the documentation. When I download the trace data, span links are not mentioned at all. The (debug) logs of the splunk-otel-collector don't mention any errors or abnormalities either. The following example shows my test span. It should be linking to two other spans, but it doesn't show up as such. Additionally, I tested with Jaeger All-in-One, and there the span links show up properly. I'm thankful for any hints that might help me debug this problem.
We have 100+ use cases onboarded into Splunk ES, and we are receiving alerts from a few of them. I want to know the exact count of how many use cases are onboarded into Splunk and, of those, how many have actually triggered alerts. Any guidance is much appreciated.
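One way to approximate this is to count enabled correlation searches via REST and compare that with the searches that have produced notables. This is a sketch; the correlation-search attribute and the notable index name may differ depending on your ES version and configuration.

# How many use cases (correlation searches) are onboarded and enabled
| rest /services/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1 disabled=0
| stats count as onboarded_use_cases

# How many of them have actually triggered, based on notable events
index=notable
| stats dc(search_name) as triggered_use_cases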
Hi all, I have the following issue. I have a table A:

col1  col2
A     aa
B     bb
C     aa

And a table B:

colA  colB
aa    FYI
bb    LOL

I need to add column colB to table A, based on matching the values of col2 (table A) with colA (table B), so the result should look like:

col1  colB  col2
A     FYI   aa
B     LOL   bb
C     FYI   aa

So basically, map the values from col2 to colA and add colB based on the matches. Thanks for your support,
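A sketch of one way to do this in SPL, assuming table A is produced by your search (or an inputlookup) and table B is saved as a lookup file (the name table_b.csv is a placeholder):

<your search or inputlookup producing col1 and col2>
| lookup table_b.csv colA AS col2 OUTPUT colB
| table col1 colB col2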
Hi, I'm trying to display some data as a single value (with a sparkline) using the single value viz. I want to break the result up by "countries", but nothing comes out. I get a sparkline, but the rest are all zeros.

| gentimes start=01/01/2024 end=01/31/2024 increment=1d
| eval _time=starttime
| eval countries="Sweden,Finland,Estonia,Lithuania,Norway,Latvia"
| makemv delim="," countries
| mvexpand countries
| eval value=round(random() % 100, 0)
| streamstats count
| sort countries
| timechart span=1d avg(value) by countries
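For a per-country single value with its own sparkline, a pattern that may be closer to what you want (a sketch, normally combined with trellis layout so each country gets its own panel) is stats with the sparkline() function instead of timechart:

| gentimes start=01/01/2024 end=01/31/2024 increment=1d
| eval _time=starttime
| eval countries="Sweden,Finland,Estonia,Lithuania,Norway,Latvia"
| makemv delim="," countries
| mvexpand countries
| eval value=round(random() % 100, 0)
| stats sparkline(avg(value)) as trend, avg(value) as value by countries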
We're sending AWS ELB access logs (Classic ELB, NLB, and ALB) to HEC using Lambda. I have installed the Splunk Add-on for AWS on the SH and the HEC instance. The add-on has regexes to parse the access logs, and all the field extractions from the REGEX seem to be working fine. However, we're having an issue with the timestamp of the event: it is extracted as a "timestamp" field, but _time is being assigned the ingestion time instead of the actual time from the event. I tried adding TIME_PREFIX in props.conf in Splunk_TA_AWS for the aws:elb:accesslogs sourcetype, but it doesn't work.

Sample events:

NLB -
tls 2.0 2025-01-15T23:59:54 net/loadbalancerName/guid 10.xxx.xxx.1:32582 10.xxx.x.xx:443 1140251 85 3546 571 - arn:aws:acm:us-west-2:026921344628:certificate/guid - ECDHE-RSA-XXXX-GCMXXX tlsv12 - example.io - - - 2025-01-15T23:40:54

ALB -
https 2018-07-02T22:23:00.186641Z app/my-loadbalancer/50dc6c495c0c9188 192.168.131.39:2817 10.0.0.1:80 0.086 0.048 0.037 200 200 0 57 "GET https://www.example.com:443/ HTTP/1.1" "curl/7.46.0" ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/my-targets/73e2d6bc24d8a067 "Root=1-58337281-1d84f3d73c47ec4e58577259" "www.example.com" "arn:aws:acm:us-east-2:123456789012:certificate/12345678-1234-1234-1234-123456789012" 1 2018-07-02T22:22:48.364000Z "authenticate,forward" "-" "-" "10.0.0.1:80" "200" "-" "-" TID_123456

ELB -
2018-12-31T00:08:01.715269Z loadbalancerName 187.xx.xx.xx:48364 - -1 -1 -1 503 0 0 0 "GET http://52.x.xxx.xxx:80/ HTTP/1.1" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36" - -

props.conf:

## Classic Load Balancer ##
[source::http:lblogs]
EXTRACT-elb = ^\s*(?P<timestamp>\S+)(\s+(?P<elb>\S+))(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))(\s+(?P<backend>\S+))(\s+(?P<request_processing_time>\S+))(\s+(?P<backend_processing_time>\S+))(\s+(?P<response_processing_time>\S+))(\s+(?P<elb_status_code>\S+))(\s+(?P<backend_status_code>\S+))(\s+(?P<received_bytes>\d+))(\s+(?P<sent_bytes>\d+))(\s+"(?P<request>[^"]+)")(\s+"(?P<user_agent>[^"]+)")(\s+(?P<ssl_cipher>\S+))(\s+(?P<ssl_protocol>\S+))
EVAL-rtt = request_processing_time + backend_processing_time + response_processing_time
sourcetype = aws:elb:accesslogs

## Application Load Balancer ##
[source::http:aws-lblogs]
EXTRACT-elb = ^\s*(?P<type>\S+)(\s+(?P<timestamp>\S+))(\s+(?P<elb>\S+))(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))(\s+(?P<target>\S+))(\s+(?P<request_processing_time>\S+))(\s+(?P<target_processing_time>\S+))(\s+(?P<response_processing_time>\S+))(\s+(?P<elb_status_code>\S+))(\s+(?P<target_status_code>\S+))(\s+(?P<received_bytes>\d+))(\s+(?P<sent_bytes>\d+))(\s+"(?P<request>[^"]+)")(\s+"(?P<user_agent>[^"]+)")(\s+(?P<ssl_cipher>\S+))(\s+(?P<ssl_protocol>\S+))(\s+(?P<target_group_arn>\S+))(\s+"(?P<trace_id>[^"]+)")(\s+"(?P<domain_name>[^"]+)")?(\s+"(?P<chosen_cert_arn>[^"]+)")?(\s+(?P<matched_rule_priority>\S+))?(\s+(?P<request_creation_time>\S+))?(\s+"(?P<actions_executed>[^"]+)")?(\s+"(?P<redirect_url>[^"]+)")?(\s+"(?P<error_reason>[^"]+)")?
EVAL-rtt = request_processing_time + target_processing_time + response_processing_time
priority = 1
sourcetype = aws:elb:accesslogs

## Network Load Balancer ##
[source::http:lblogs]
EXTRACT-elb-nlb = ^\s*(?P<type>\S+)(\s+(?P<log_version>\S+))(\s+(?P<timestamp>\S+))(\s+(?P<elb>\S+))(\s+(?P<listener>\S+))(\s+(?P<client_ip>[\d.]+):(?P<client_port>\d+))(\s+(?P<destination_ip>[\d.]+):(?P<destination_port>\d+))(\s+(?P<connection_time>\S+))(\s+(?P<tls_handshake_time>\S+))(\s+(?P<received_bytes>\d+))(\s+(?P<sent_bytes>\d+))(\s+(?P<incoming_tls_alert>\S+))(\s+(?P<chosen_cert_arn>\S+))(\s+(?P<chosen_cert_serial>\S+))(\s+(?P<tls_cipher>\S+))(\s+(?P<tls_protocol_version>\S+))(\s+(?P<tls_named_group>\S+))(\s+(?P<domain_name>\S+))(\s+(?P<alpn_fe_protocol>\S+))(\s+(?P<alpn_be_protocol>\S+))(\s+(?P<alpn_client_preference_list>\S+))
sourcetype = aws:elb:accesslogs

[aws:elb:accesslogs]
TIME_PREFIX = ^.*?(?=20\d\d-\d\d)
TIME_FORMAT =
MAX_TIME_LOOKAHEAD =
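For comparison, a minimal timestamp stanza that might work for the three formats above. This is a sketch, not tested against your data; note that the real setting name is MAX_TIMESTAMP_LOOKAHEAD, that timestamp recognition only happens on the tier that first parses the events (here the HEC-receiving instance), and that if the Lambda function uses the HEC event endpoint, the event time generally comes from the payload's time field rather than from TIME_PREFIX parsing, so the raw endpoint (or setting time in Lambda) may be needed.

[aws:elb:accesslogs]
# Skip the leading type/version fields (0 for Classic, 1 for ALB, 2 for NLB)
# and anchor just before the first ISO 8601 timestamp
TIME_PREFIX = ^(?:\S+\s+){0,2}(?=\d{4}-\d{2}-\d{2}T)
# Leaving TIME_FORMAT unset lets Splunk auto-recognise both the second-precision
# (NLB) and microsecond (ALB/ELB) variants within the lookahead window
MAX_TIMESTAMP_LOOKAHEAD = 40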
I have configured an OAuth client ID and secret on my client's ServiceNow instance. I configured the account in the Splunk Add-on for ServiceNow application, and the configuration completed without issue. I was then able to configure an input to pull from the CMDB database using the OAuth credentials. However, when I try to pull the "sn_si_incident" table from the SIR database, I get the message "Insufficient rights to query records: Fields present in the query do not have permission to be read". When I configured the OAuth credentials in the add-on, I used an account (e.g. svc_account1) that I know has permission to read from this table. We have also tested with Postman and can pull from the security incident table; in Postman we configured the client ID/secret as well as the username and password (using svc_account1). We noticed that when we use OAuth through Postman, the user is the correct one (svc_account1), but when we use the Splunk add-on, the user is my own account. Has anyone ever tried to use OAuth to access the security database tables? Is the add-on built to handle the security database tables? I wonder about this because when I try to select a table from the dropdown I don't see "sn_si_incident" (probably because the only tables available are from the CMDB database). Thanks.
Hi Splunkers, this is my first post as I am new to using Splunk. My issue arises when I try to pull specific values from a time range within one search. To do this I am using appendcols to add another search, designating new values for earliest and latest, then using "stats latest(field) as 'name' by field, field" to pull these values out. Here is an example query:

index="index" <search> earliest=-4h@h latest=@h
| stats latest(FieldA) as DataNew earliest(FieldA) as DataOld by Field1, Field2, Field3
| appendcols [search index="index" <search> earliest=-3h@h latest=-1@h
    | stats latest(FieldA) as DataMidOld earliest(FieldA) as DataMidNew by Field1, Field2, Field3]
| table DataNew, DataMidNew, DataMidOld, DataOld, Field1, Field2, Field3

In my mind, I see no error with this search, but the values for DataMidOld and DataMidNew do not align with the actual data and are seemingly random. Any help is appreciated!
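Since appendcols pairs rows by position rather than by the split-by fields, one alternative sketch (assuming the field names above) is to compute the mid-window values conditionally inside a single stats over the full 4-hour range:

index="index" <search> earliest=-4h@h latest=@h
| eval in_mid=if(_time>=relative_time(now(),"-3h@h") AND _time<relative_time(now(),"-1h@h"), 1, 0)
| stats latest(FieldA) as DataNew earliest(FieldA) as DataOld
        latest(eval(if(in_mid=1, FieldA, null()))) as DataMidNew
        earliest(eval(if(in_mid=1, FieldA, null()))) as DataMidOld
        by Field1 Field2 Field3
| table DataNew DataMidNew DataMidOld DataOld Field1 Field2 Field3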
The following is my query:

index="xyz" host="*"
| fields host, messagevalue
| search "total payment count :"
| eval messagevalue=mvindex(split(messagevalue,":"),1)
| stats latest(messagevalue)

For a given period, if there are no events, "No results found" is displayed. Instead I want zero to be displayed. I tried using fillnull but had no luck.
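fillnull can't help here because stats over zero events returns no rows at all, so there is nothing to fill. A common workaround sketch, keeping your query but giving the stats output an explicit name, is appendpipe: its subpipeline runs over the (empty) result set and injects a zero row only when nothing was found.

index="xyz" host="*"
| fields host, messagevalue
| search "total payment count :"
| eval messagevalue=mvindex(split(messagevalue,":"),1)
| stats latest(messagevalue) as latest_count
| appendpipe [ stats count | where count=0 | eval latest_count=0 | fields - count ]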
What does currentDBsizeMB actually represent? We are seeing discrepancies between the actual file-system consumption on our indexers and our cold storage volumes (which are NVMe-over-IP mounts on different servers). Does currentDBsizeMB include just hot? Hot/warm? Or hot/warm/cold? Does it include replica copies, or just one copy, so that you have to multiply the value by your replication factor to get the "true" size of the index on disk? I have been unable to find a definitive answer on this; thanks in advance to anyone who can help shed some light on this field.
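One way to cross-check what the number covers in your own environment is to compare it against per-bucket sizes from dbinspect, broken out by bucket state and indexer. A sketch (your_index is a placeholder):

# Per-indexer, per-state on-disk size from bucket metadata
| dbinspect index=your_index
| stats sum(sizeOnDiskMB) as size_mb, count as buckets by state, splunk_server

# The reported currentDBSizeMB per indexer, for comparison
| rest /services/data/indexes count=0
| search title="your_index"
| table splunk_server, title, currentDBSizeMB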
Hello Splunk Community, I am very new to Splunk and was given the following task and could really use some help: gather all the data that Splunk is collecting and put it in a visually readable format for executives. I have tried many things to accomplish this, such as using Enterprise Security > Audit > Index Audit and Forwarder Audit, and building custom classic dashboards and Dashboard Studio dashboards to play around with the data. Nothing seems to give me what I need. I have also tried the following:

| tstats values(source) as sources, values(sourcetype) as sourcetype where index=* by host
| lookup dnslookup clienthost as host OUTPUT clientip as src_ip

This method is very resource intensive and provides the information I need, but the sources and sourcetypes are incredibly long and make the table hard for executives to read. Is there another way to do this?
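A lighter-weight sketch that keeps the table at an executive-friendly granularity, counting per index and sourcetype instead of listing every source (field names follow your query; adjust as needed):

| tstats count as events, dc(host) as hosts, max(_time) as last_seen where index=* by index, sourcetype
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M")
| sort - events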
Hello Splunkers, I was wondering if it's possible to combine adaptive and static thresholds in IT Service Intelligence for the same KPI. As an example, let's consider the percentage of swap memory used by a host. If I apply static thresholds, I know there's an issue only when the last detected value exceeds a fixed number (we can call this "the old-style monitoring"). On the other hand, if I use ITSI adaptive thresholding, the boundary will adapt itself using historical data. This solution would be great, but let's imagine that the swap memory used by the system slowly but continuously grows over days and weeks. At a certain point it will reach 100%, but the KPI state will say "normal" because that value is, in some way, aligned with previous ones. Is there a way to use the adaptive thresholding behavior while keeping the "emergency" values fixed? Thanks in advance. Happy Splunking!
After upgrading Splunk core to release 9.4.0, when I try to bind LDAP group names to roles inside Splunk (I have about 200 roles), Splunk shows me only 30 roles. I have worked around this bug/issue by setting the mapping via the conf file and then restarting the Splunk service, but this is tedious. Have you encountered this issue? How can it be resolved?

NOTE: Environment: Search Head Cluster, Splunk Enterprise rel. 9.4.0
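For reference, the conf-file workaround mentioned above typically lives in authentication.conf as a roleMap stanza. This is a sketch only; the strategy name, role names, and group names are placeholders, and the group names must match what your groupNameAttribute returns.

# authentication.conf (on a search head cluster, push via the deployer)
[roleMap_YourLDAPStrategy]
# <splunk_role> = <LDAP group>;<LDAP group>
admin = splunk_admins
power = splunk_power;splunk_analysts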
I have a search that searches 2 different indexes. We expect that there is 1 record from each index for a single ID. The search is pretty simple:

index=index1
| rename Number__c as EventId
| append [search index=index2 sourcetype="api" ]
| stats count by EventId
| search count < 2

What I would like to do now is verify that there is a single record from each index for each EventId, to ensure that a count of 2 isn't 2 records from a single index. There are times when, in index2, a single EventId has more than one record, which makes the count inaccurate because it doesn't tell me whether there was a record for it in index1.
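A sketch that counts distinct indexes per EventId instead of raw records (field names assumed from your search; the index field survives the combined search, so it can be used in the split):

index=index1 OR (index=index2 sourcetype="api")
| eval EventId=coalesce(EventId, Number__c)
| stats dc(index) as indexes_seen, count by EventId
| where indexes_seen < 2 OR count != 2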
Hi, we have installed TrackMe in our Splunk Cloud for log and host monitoring. I have set up alerts for a few sourcetypes, tracking whether no logs report to Splunk for an hour. What I want to understand is: if an alert has been triggered and the issue has been taken care of, how do we acknowledge the alert? I am unfamiliar with the TrackMe UI. My version is 2.1.7. The item I have circled is the number of alerts that have triggered. Let's say the issue is fixed for one of the sourcetypes, but the number still shows 4. Could someone please explain?
Hello, I have a lookup table which contains fields as below:

user        shortname
email 1     name1
email 2     name2

I want to search a specific index and find whether the users in the lookup table logged in to any app in the past 1 month. I am trying something like this, but I'm not getting exact matches with the users in the lookup table. Please help.

| inputlookup users_list.csv
| join user type=outer [ search index="my_index" sourcetype="my_sourcetype"
    | fields app action signinDateTime user shortname ]
| table app action signinDateTime user shortname
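An alternative sketch that avoids join: filter the index search to the lookup's users via a subsearch, then enrich with the lookup (field names assumed from your example; an exact, case-sensitive match on user is assumed).

index="my_index" sourcetype="my_sourcetype" earliest=-30d@d
    [ | inputlookup users_list.csv | fields user ]
| lookup users_list.csv user OUTPUT shortname
| stats latest(signinDateTime) as last_signin, values(app) as apps, values(action) as actions by user, shortname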
Hi, yes, you read it right. One of my small labs is planning to migrate their Splunk deployment from RHEL to Windows. Their main reason is that they do not have a Linux admin. I am going to help them migrate, but I am a Linux admin and have never done a migration from one platform to another. Has anyone done that? Any tips on how to go about it?
As per the subject, I'm attempting to convert a rex expression in my search into a proper field extraction using the Field Extractor, so I can drop the rex and use the field in my base search directly. The rex expression works perfectly but requires max_match=0 in order to get all the results. Unless I'm mistaken (which is very possible), I don't have that option available in the Field Extractor, and because of that, the regex only picks up one value instead of multiple. I've tested the regex on regex101, and it works fine there, grabbing all the values properly. It's just in the Field Extractor that it doesn't grab everything. The rex expression does use a specific field rather than just running on _raw, but the search does work when running on _raw (I verified). The rex expression is below, followed by the regex itself.

rex field=AttachmentDetails max_match=0 "(?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)"

(?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)

Below is some test data you can use on regex101 to verify the expression above. It captures both fields on the site, just not in Splunk, and I can't tell why. Perhaps I've misunderstood how grouping works in regex.

orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc, {'NotSecrets!!.txt': 'fileHash': 'a3b9adaee5b83973e8789edd7b04b95f25412c764c8ff29d0c63abf25b772646'}, {}}, 'Secrets!!.txt': 'fileHash': 'c092a4db704b9c6f61d6a221b8f0ea5f719e7f674f66fede01a522563687d24b'}, {}}} orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc,
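If the Field Extractor UI won't produce a multi-value extraction, a manual sketch is a REPORT transform with MV_ADD. The sourcetype name below is a placeholder; the regex is yours, run against _raw since you confirmed it works there.

# props.conf
[your:sourcetype]
REPORT-attachment_details = attachment_details

# transforms.conf
[attachment_details]
REGEX = (?:'(?<attachments>.*?)'.*?'fileHash': '(?<sha256>\w+)'}.*?\{.*?\}\}[,\}]\s?)
# MV_ADD keeps every match as an additional value instead of stopping at the first one
MV_ADD = true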
Hi there, I am confused about the configuration steps for getting data in from Salesforce. When adding a Salesforce account I want to use OAuth, but I am only a Splunk admin; the technical user to be used is managed by our Salesforce admin. My understanding is that one of us would need admin capabilities on both instances to make it work. What we tried: the user was configured on the Salesforce side by another account that is a Salesforce admin, and I configured the add-on on the Splunk side with my admin account. The redirect link has been added to Salesforce, and I tried to set up the add-on in Splunk as explained in the documentation of the Add-on for Salesforce, but an error occurs when trying to connect them. Another hindrance is the use of LDAP. To make it work, I would either need to give the Salesforce admin Splunk admin capabilities or, the other way around, get Salesforce admin rights myself. But that is something we do not want; the responsibilities should remain as they are: Splunk for Splunk, Salesforce for Salesforce. Is there any other way to make it work with a technical user? Or is it just not possible with OAuth?

Best regards