All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

This is similar to a question I asked earlier today that was quickly answered; however, I'm not sure if I can apply that solution here because of the transpose, and I'm not sure how to reference the data correctly for that. We have data with 10-15 fields in it and we are doing a transpose like the one below. What we are looking to accomplish is to display only the rows where the values are the same, or alternatively only the rows where they are different.

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1

column     sys1    sys2
field2     a       b
field3     10      10
field4     a       a
field5     10      20
field6     c       c
field7     20      20
field8     a       d
field9     10      10
field10    20      10
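A minimal sketch of one possible approach, assuming the transposed columns end up named sys1 and sys2 as in the sample above: after transpose, the system names become ordinary field names, so a where clause can compare them directly.

index=idx1 source="src1"
| table field1 field2 field3 field4 field5 field6 field7 field8 field9 field10
| transpose header_field=field1
``` keep only rows where the two systems agree; use != instead to show the differences ```
| where sys1 = sys2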
Hello, We have two clustered Splunk platforms. Several sources are sent to both platforms (directly to the clustered indexers) with index app-idx1; on the 2nd platform we then use props.conf/transforms.conf to rewrite the target index name to application_idx2. For an unknown reason, a few sources are falling through to lastchanceindex.

props.conf
[source::/path/to/app_json.log]
TRANSFORMS-app-idx1 = set_idx1_index

transforms.conf
[set_idx1_index]
SOURCE_KEY = _MetaData:Index
REGEX = app-idx1
DEST_KEY = _MetaData:Index
FORMAT = application_idx2

Thanks for your help.
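A small hedged sketch that may help narrow this down - it lists which sources, sourcetypes and hosts are landing in the last-chance index (assuming lastchanceindex is the actual name configured on the indexers), so you can see whether the failing sources ever matched the [source::...] stanza at all:

index=lastchanceindex
``` group the stray events to see which sources are missing the transform ```
| stats count by source sourcetype host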
Hello, We identify a failed request by gathering data from 3 different logs. I need to group by userSesnId, and if these specific logs appear in my list, that defines a certain failure. I would like to count each failure by using these logs. I would greatly appreciate your help with writing this search query. I hope this makes sense. Thank you. I would like to use the information from these logs, grouped by userSesnId:

Log #1:
msgDtlTxt: [Qxxx] - the quote is declined.
msgTxt: quote creation failed.
polNbr: Qxxx

Log #2:
httpStatusCd: 400

Log #3:
msgTxt: Request.

They all share the same userSesnId: 10e30ad92e844d

So my results should look something like this:

polNbr    msgDtlTxt            msgTxt                   httpStatusCd    count
Qxxx      Validation: UWing    quote creation failed    400             1
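A rough sketch of one way to stitch the three logs together, assuming they live in the same index (idx_app is a hypothetical name here) and the fields shown are already extracted:

index=idx_app (msgTxt="quote creation failed." OR httpStatusCd=400 OR msgTxt="Request.")
``` collapse the log lines that share a session id into one row per session ```
| stats values(polNbr) as polNbr values(msgDtlTxt) as msgDtlTxt values(msgTxt) as msgTxt values(httpStatusCd) as httpStatusCd count by userSesnId
``` keep only sessions where the failure signals are all present; each remaining row is one failure ```
| where isnotnull(polNbr) AND httpStatusCd=400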
If I execute the query below for a selected time range of around 20 hours, it takes a long time and matches about 272,000 events. How can I simplify this query so it returns results in 15 to 20 seconds?

index=asvservices authenticateByRedirectFinish (*)
| join request_correlation_id
    [ search index=asvservices stepup_validate ("isMatchFound\\\":true")
      | spath "policy_metadata_policy_name"
      | search "policy_metadata_policy_name" = stepup_validate
      | fields "request_correlation_id" ]
| spath "metadata_endpoint_service_name"
| spath "protocol_response_detail"
| search "metadata_endpoint_service_name"=authenticateByRedirectFinish
| rename "protocol_response_detail" as response
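A hedged sketch of the usual join-free rewrite - pull both event types in one pass and correlate with eventstats, which avoids the subsearch limits of join and is typically much faster (this assumes request_correlation_id is present on both event types and extractable via spath):

index=asvservices (authenticateByRedirectFinish OR (stepup_validate "isMatchFound\\\":true"))
| spath "policy_metadata_policy_name"
| spath "metadata_endpoint_service_name"
| spath "protocol_response_detail"
| spath "request_correlation_id"
``` carry the stepup_validate policy name onto every event sharing the correlation id ```
| eventstats values(policy_metadata_policy_name) as policy_name by request_correlation_id
| search metadata_endpoint_service_name=authenticateByRedirectFinish policy_name=stepup_validate
| rename protocol_response_detail as response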
A few servers are hosted in a private VPC that is not connected to the organisation's IT network. How can we onboard those Linux hosts?
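One common pattern, sketched here with hypothetical hostnames, is to place an intermediate heavy forwarder at a point both networks can reach (for example a DMZ or a VPC peering endpoint) and have the isolated hosts' universal forwarders send to it via outputs.conf:

[tcpout]
defaultGroup = intermediate_hf

[tcpout:intermediate_hf]
``` hf-dmz.example.com is a placeholder for whatever host both networks can route to ```
server = hf-dmz.example.com:9997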
Hello, I get a "Failed processing http input" error when trying to collect the following JSON event with indexed fields:

{"index" : "test", "sourcetype" : "test", "event" : "This is a test", "fields" : { "name" : "test", "values" : {} }}

The error is: "Error in handling indexed fields". Could anyone clarify the reason for the error? Is it that a value under "fields" cannot be empty? I can't prevent it at the source.

Best regards, David
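For comparison, a payload shaped like this is what the HEC indexed-fields handler accepts - every value under "fields" must be a string or an array of strings, so a nested or empty object such as "values" : {} is rejected (my understanding of the behavior; worth verifying against the HEC docs for your version):

{"index": "test", "sourcetype": "test", "event": "This is a test", "fields": {"name": "test", "values": ["v1", "v2"]}}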
After looking at some examples online, I was able to come up with the query below, which can display one or more columns of data based on the selection of "sysname". What we would like to do with this is optionally select just two sysnames and show only the rows where the values do not match.

index=idx1 source="file1.log" sysname IN ("SYS1","SYS6")
| table sysname value_name value_info
| eval {sysname}=value_info
| fields - sysname, value_info
| stats values(*) as * by value_name

The data format is as below, and there are a couple hundred value_names for each sysname, with formats varying from integer values to long strings:

sysname, value_name, value_info

The above query displays the data something like this:

value_name    SYS1    SYS6
name1         X       Y
name2         A       A
name3         B       C
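A minimal sketch, assuming exactly the two selected sysnames are known (SYS1 and SYS6 here): once stats has pivoted them into columns, a where clause drops the matching rows.

index=idx1 source="file1.log" sysname IN ("SYS1","SYS6")
| table sysname value_name value_info
| eval {sysname}=value_info
| fields - sysname, value_info
| stats values(*) as * by value_name
``` keep only rows where the two systems disagree; flip to = to show matching rows ```
| where SYS1 != SYS6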
Good day, I am trying to figure out how I can join two searches to see if there is a ServiceNow ticket open for someone leaving the company and whether that person is still signing into some of our platforms.

This gets the sign-in details into the platform - as users might have multiple email addresses, I want them all:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

This checks all leavers in ServiceNow:

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
| table _time affect_dest active description dv_state number

Unfortunately the Supporthub does not add the email to the description, only first names and surnames. So I would need to search the first query's 'first' and 'last' against the second query to find leavers. This is what I tried, but it does not work:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
      | fields user
      | dedup user
      | eval email=user, extensionAttribute10=user, extensionAttribute11=user
      | fields email extensionAttribute10 extensionAttribute11
      | format "(" "(" "OR" ")" "OR" ")" ]
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
      | dedup description
      | rex field=description "*(?<first>\S+) (?<last>\S+)*"
      | fields first last ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

Search one results:

identity    email                      extensionattribute10     extensionattribute11           first    last
nsurname    name.surname@domain.com    nsurnameT1@domain.com    name.surname@consultant.com    name     surname

Search two will get all the tickets that were created for people leaving the company and will return results like this:

_time                  affect_dest    active    description                                     dv_state    number
2024-10-31 09:46:55    STL Leaver     true      Leaver Request for Name Surname - 31/10/2024    active      INC01

So the only way of matching would be to search the second query's description field for where first and last appear. Expectations:

identity  email  extensionattribute10  extensionattribute11  first  last  _time  affect_dest  active  description  dv_state  number
nsurname  name.surname@domain.com  nsurnameT1@domain.com  name.surname@consultant.com  name  surname  2024-10-31 09:46:55  STL Leaver  true  Leaver Request for Name Surname - 31/10/2024  active  INC01
jdoe  john.doe@domain.com  jdoeT1@domain.com  jdoe@worker.com  john  doe  2024-11-11 12:46:55  STL Leaver  true  John Doe Offboarding on - 31/12/2024  active  INC02
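A hedged sketch of one way to line these up: extract the name from the ticket description, build a normalized name key on both sides, and join on that key. The rex pattern here is a guess at the description format ("... for Name Surname - date") and will need tuning against real tickets:

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
``` pull a "First Last" pair out of the free-text description - the pattern is an assumption ```
| rex field=description "(?i)for\s+(?<first>\w+)\s+(?<last>\w+)"
| eval name_key=lower(first." ".last)
| join type=inner name_key
    [ search index=collect_identities sourcetype=ldap:query
      | eval name_key=lower(first." ".last)
      | fields name_key identity email extensionAttribute10 extensionAttribute11 ]
| table identity email extensionAttribute10 extensionAttribute11 first last _time affect_dest active description dv_state number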
Can you please help me build an eval query?

Condition 1: ABC=Match and XYZ=Match, then the output of ABC compared to XYZ is Match
Condition 2: ABC=Match and XYZ=No_Match, then the output of ABC compared to XYZ is No_Match
Condition 3: ABC=No_Match and XYZ=Match, then the output of ABC compared to XYZ is No_Match
Condition 4: ABC=No_Match and XYZ=No_Match, then the output of ABC compared to XYZ is No_Match
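Since the output is Match only when both fields are Match, a single if() covers all four conditions (field names taken as given above):

| eval ABC_vs_XYZ=if(ABC="Match" AND XYZ="Match", "Match", "No_Match")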
Hello, I have a DB Connect query that gets data from a database and then sends it to a Splunk index. Below is the query and also how it looks in Splunk. The data is being indexed as key=value pairs with double quotes around the "value". I have plenty of other data that does not use DB Connect, and it does not have double quotes around the values.

Maybe the quotes are there because I'm using DB Connect? Is it possible to index data from DB Connect without adding the quotes? When I try to search the data in Splunk I just don't get any data back. I think it may have to do with the double quotes? I'm not sure. Here is the search string. The air_temp field is defined in the Climate data model, and TA (air temperature) in the data is mapped in props.conf under the sourcetype TU_CLM_Time.

| tstats avg(Climate.air_temp) as air_temp from datamodel="Climate" where sourcetype="TU_CLM_Time" host=TU_CLM_1 by host _time span=60m ```Fetching relevant fields from CLM sourcetype in CLM datamodel.```
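A hedged first check: confirm the raw events are searchable at all outside the data model (narrow index=* to the real index, which isn't shown above). Splunk's automatic key=value extraction handles double-quoted values natively, so if this returns events but the tstats search doesn't, the data model constraints or acceleration are a more likely culprit than the quotes:

index=* sourcetype="TU_CLM_Time" host=TU_CLM_1
| head 5
``` check whether TA is extracted from the quoted key=value pairs ```
| table _time _raw TA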
Hey Splunk team, I'm facing an issue where Splunk fails to find certain key-value pairs in some events unless I use wildcards (*) in the value. Here's an example to illustrate the problem:

{
  "xxxx_ID": "78901",
  "ERROR": "Apples mangos lemons. Banana blackberry blackcurrant blueberry.",
  "yyyy_NUM": "123456",
  "PROCESS": "orange",
  "timestamp": "yyyy-mm-ddThh:mm:ss"
}

Query examples:

This works (using wildcards):
index="idx_xxxx" *apples mangos lemons*

These don't work:
index="idx_xxxx" ERROR="Apples mangos lemons. Banana blackberry blackcurrant blueberry."
index="idx_xxxx" ERROR=*apples mangos lemons*

The query below, using regex to capture any value after the ERROR key, also does not return all error values:
index="idx_xxxx" | rex field=_raw "ERROR:\s*(?<error_detail>.+?)(?=;|$)" | table error_detail

Observations:
Non-Latin characters are not the issue; in other events, for example, Greek text in the ERROR field is searchable without wildcards.
This behavior is inconsistent: some events allow exact matches, but others don't.

Questions:
Could this issue stem from inconsistencies in the field extraction process?
Are there common pitfalls or misconfigurations during indexing or sourcetype assignment that might cause such behavior?
How can I debug and verify that the keys and values are properly extracted/indexed?

Any help would be greatly appreciated! Thank you!
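A hedged debugging step: force JSON extraction at search time with spath and see whether the exact match starts working. If it does, the sourcetype's search-time extraction (for example KV_MODE=json) is likely not being applied to the failing events:

index="idx_xxxx" *apples mangos lemons*
``` extract fields from the raw JSON regardless of the sourcetype configuration ```
| spath
| table _raw ERROR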
I have an index into which data comes via DB Connect, but it is showing NO EVENTS along with the error "Invalid database connection", even though everything is fine on the database side.
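A hedged place to look next - DB Connect writes more detailed errors to its own logs in the _internal index (the source pattern below is my assumption of the default file naming; adjust to what you see on the host):

index=_internal source=*splunk_app_db_connect* ERROR
| table _time source _raw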
Hi Splunk Community, I've set up Azure Firewall logging, selecting all firewall logs and archiving them to a storage account (Event Hub was avoided due to cost concerns). The configuration steps taken are as follows:

Log Archival: All Azure Firewall logs are set to archive to a storage account.

Microsoft Cloud Add-On: I added the storage account to the Microsoft Cloud Add-On using the secret key with the following permissions:

Input/Action: Azure Storage Table, Azure Storage Blob
API Permissions: N/A
Access key OR Shared Access Signature:
  - Allowed services: Blob, Table
  - Allowed resource types: Service, Container, Object
  - Allowed permissions: Read, List
Role (IAM): N/A
Default Sourcetype(s) / Sources: mscs:storage:blob (received this), mscs:storage:blob:json, mscs:storage:blob:xml, mscs:storage:table

We are receiving events from the source files in JSON format, but there are a few issues:

1. Field Extraction: Critical fields such as protocol, action, source, destination, etc., are not being identified.
2. Incomplete Logs: Logs appear truncated, starting with partial data (e.g., "urceID:..." with the leading "Reso" missing), which implies dropped or incomplete events (as far as I understand).
3. Few logs were received compared to the traffic on the Azure Firewall.

Attached is a sample of the logs showing the errors mentioned above.
________________________________________________________________
Environment Details:
• Log Collector: Heavy Forwarder (HF) hosted in Azure.
• Data Flow: Logs are forwarded to Splunk Cloud.

Questions:
1. Could this be an issue with using storage accounts rather than Event Hub?
2. Could the incomplete logs be due to a configuration issue with the Microsoft Cloud Add-On, or possibly related to the data transfer between the storage account and Splunk?
3. Has anyone encountered similar issues with field extraction from Azure Firewall JSON logs?

Ultimate Goal: Receive Azure Firewall logs with fields extracted like any other firewall logs received via syslog (Fortinet, for example).

Any guidance or troubleshooting suggestions would be much appreciated!
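On the truncation symptom specifically, a hedged props.conf sketch for the heavy forwarder - large single-line JSON blobs can exceed the default 10,000-character TRUNCATE limit and come out clipped (whether that matches your event shape is an assumption, and the add-on may already ship its own props for this sourcetype):

[mscs:storage:blob:json]
``` 0 disables the line-length limit; a large explicit value is a safer alternative ```
TRUNCATE = 0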
I would like to seek advice from experienced professionals. I want to add another heavy forwarder to my environment as a backup in case the primary one fails (on a different network and not necessarily active-active). I have Splunk Cloud, plus 1 heavy forwarder and 1 deployment server on premises.

1. If I copy the heavy forwarder (VM) from one vCenter to another, change the IP, and generate new credentials from Splunk Cloud, will it work immediately? (I want to preserve my existing configurations.)
2. I have a deployment server. Can I use it to manage the configuration of two heavy forwarders? If so, what would be the implications? (Would there be data duplication, or is there a way to prioritize data? Or is there a better way I should do this?) Please advise.
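On question 2, a hedged serverclass.conf sketch for managing both heavy forwarders from the one deployment server (hostnames and the app name hf_outputs are hypothetical). Whether you see data duplication depends on whether the sources send to both forwarders, not on the deployment server itself:

[serverClass:heavy_forwarders]
whitelist.0 = hf-primary.example.com
whitelist.1 = hf-backup.example.com

[serverClass:heavy_forwarders:app:hf_outputs]
restartSplunkd = true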
We are experiencing issues configuring RADIUS authentication within Splunk. Despite following all required steps and configurations, authentication via RADIUS is not working as expected, and users are unable to authenticate through the RADIUS server.

- Installed a RADIUS client on the Splunk machine and configured the radiusclient.conf file with the RADIUS server details.
- Updated the authentication.conf file located in $SPLUNK_HOME/etc/system/local/, as well as web.conf, to support RADIUS authentication requests in Splunk Web.
- Used the radtest tool to validate the connection from the Splunk RADIUS client.
- Monitored the Splunk authentication logs in $SPLUNK_HOME/var/log/splunk/splunkd.log to identify any errors, and consistently encountered the following error: Could not find [externalTwoFactorAuthSettings] in authentication stanza.
- Integrated radiusScripted.py to assist with RADIUS authentication, configuring it to work with the authentication settings.

It appears that Splunk is unable to successfully authenticate with the RADIUS server, with repeated errors indicating missing configuration stanzas or settings that are not recognized.

Environment Details:
Splunk Version: 9.1.5
Authentication Configuration Files: authentication.conf, web.conf
Additional Scripts: radiusScripted.py

Please advise on troubleshooting steps or configuration adjustments needed to resolve this issue. Any insights or documentation on RADIUS integration best practices with Splunk would be highly appreciated. Thanks.
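For reference, a hedged sketch of the scripted-authentication wiring that a script like radiusScripted.py normally relies on in authentication.conf (the script location is an assumption based on the file name above). The externalTwoFactorAuthSettings error suggests Splunk is instead trying to resolve an MFA setting, which plain scripted authentication does not require:

[authentication]
authType = Scripted
authSettings = script

[script]
``` path to the interpreter and the auth script - adjust to where the script actually lives ```
scriptPath = "$SPLUNK_HOME/bin/python3" "$SPLUNK_HOME/etc/system/bin/radiusScripted.py"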
Upgraded the Machine Agent from 22 to v24.8.0.4467. After the change, we are seeing the errors below:

[DBAgent-7] 06 Nov 2024 17:24:28,994 ERROR ADBMonitorConfigResolver - [XXXXXXXXXXXX] Failed to resolve DB topological structure. java.sql.SQLException: ORA-01005: null password given; logon denied

I have provided the user ID and password in the Configuration -> Controller settings. Has anyone faced this kind of issue?
Hi All, I would like to add a reset button to a dashboard; however, I am not able to see an option to add one in Dashboard Studio. Thanks
How can I get a list of all libraries included in this app? I will need that to get this through our security review.
Hello, Splunk doesn't display the extra spaces in the values I assigned. Please see the example below. I used Google Chrome and Microsoft Edge, and both gave me the same results. If I export the CSV, the data has the correct number of spaces. Please suggest. Thank you.

| makeresults
| fields - _time
| eval "One Space" = "One space Test"
| eval "Two Spaces" = "Two  spaces  Test"
| eval "Three Spaces" = "Three   spaces   Test"
I have an index with events containing a src_ip but not a username for the event. I have another index of VPN auth logs that has the assigned IP and username, but the VPN IPs are randomly assigned.

I need to get the username from the VPN logs where vpn.client_ip matches event.src_ip, and I need to make sure that the returned username is the one that was assigned during the event. In short, I need to get the last vpn client_ip assignment matching event.src_ip BEFORE the event, so the vpn.username would be the correct one for event.src_ip.

Here's a generic representation of my current query, but I get nothing back:

index=event ...
| join left=event right=vpn where event.src_ip=vpn.client_ip max=1 usetime=true earlier=true
    [ search index=vpn ]
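A hedged alternative that avoids join entirely: pull both indexes in one search, sort by time, and let streamstats carry the most recent username forward per IP, so each event picks up the last VPN assignment that happened before it (index and field names follow the generic ones above; username is assumed to be the extracted field in the VPN logs):

(index=event) OR (index=vpn)
| eval ip=coalesce(src_ip, client_ip)
| sort 0 _time
``` last() skips null usernames, so events inherit the most recent VPN assignment for their ip ```
| streamstats last(username) as vpn_user by ip
| where index="event"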