After looking at some examples online, I was able to come up with the query below, which can display one or more columns of data based on the selection of "sysname". What we would like to do is compare just two sysnames and show only the rows where the values do not match.

index=idx1 source="file1.log" sysname IN ("SYS1","SYS6")
| table sysname value_name value_info
| eval {sysname}=value_info
| fields - sysname, value_info
| stats values(*) as * by value_name

The data format is below; there are a couple hundred value_names for each sysname, with formats varying from integer values to long strings:

sysname, value_name, value_info

The above query displays the data something like this:

value_name    SYS1    SYS6
name1         X       Y
name2         A       A
name3         B       C
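If the comparison really is limited to exactly two sysnames, one possible approach (a sketch, assuming the column names SYS1 and SYS6 are fixed) is to append a where clause that keeps only the rows whose values differ:

```
index=idx1 source="file1.log" sysname IN ("SYS1","SYS6")
| table sysname value_name value_info
| eval {sysname}=value_info
| fields - sysname, value_info
| stats values(*) as * by value_name
| where SYS1 != SYS6
```

For sysnames chosen dynamically (e.g. from dashboard tokens), the hard-coded field names in the where clause would need to come from those tokens instead.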
Hi! I think you are on the right track with field extraction and its behaviour. The search that works does so because it looks for any match of your fruit string in the _raw event, whereas the ones you are struggling with look for a field=value pair which does not actually exist in the raw event (there is no "ERROR="). Splunk would have to extract this to recognize it as a field.

I would start with: what is the sourcetype of this data? Does it have any JSON parsing happening at search time, index time, or both? (Hint: KV_MODE = json / props.conf / transforms.conf.)

An easy way to start: does the Splunk UI recognize this as properly formed JSON and show it "pretty printed"? Do you see the JSON key-value pairs extracted in "interesting fields"? If not, then we would need to extract them to be able to reference the fields and their values.
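As an illustration of the hint above, search-time JSON extraction can be enabled in props.conf (the sourcetype name here is a placeholder, not taken from the question):

```
# props.conf on the search tier -- illustrative sketch
[your_json_sourcetype]
KV_MODE = json
```

Alternatively, `| spath` in the search itself extracts JSON fields ad hoc, which is a quick way to test whether extraction is the problem before changing any configuration.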
Good day, I am trying to figure out how I can join two searches to see if there is a ServiceNow ticket open for someone leaving the company and whether that person is still signing in to some of our platforms.

This gets the sign-in details for the platform; as users might have multiple email addresses, I want them all:

index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
    | fields user
    | dedup user
    | eval email=user, extensionAttribute10=user, extensionAttribute11=user
    | fields email extensionAttribute10 extensionAttribute11
    | format "(" "(" "OR" ")" "OR" ")" ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| table email extensionAttribute10 extensionAttribute11 first last identity
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

This checks all leavers in ServiceNow:

index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
| table _time affect_dest active description dv_state number

Unfortunately the Supporthub does not add the email in the description, only first names and surnames. So I would need to search the first query's 'first' and 'last' against the second query to find leavers. This is what I tried, but it does not work:
index=collect_identities sourcetype=ldap:query
    [ search index=db_mimecast splunkAccountCode=* mcType=auditLog
    | fields user
    | dedup user
    | eval email=user, extensionAttribute10=user, extensionAttribute11=user
    | fields email extensionAttribute10 extensionAttribute11
    | format "(" "(" "OR" ")" "OR" ")" ]
    [ search index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
    | dedup description
    | rex field=description "*(?<first>\S+) (?<last>\S+)*"
    | fields first last ]
| dedup email
| eval identity=replace(identity, "Adm0", "")
| eval identity=replace(identity, "Adm", "")
| eval identity=lower(identity)
| stats values(email) AS email values(extensionAttribute10) AS extensionAttribute10 values(extensionAttribute11) AS extensionAttribute11 values(first) AS first values(last) AS last BY identity

Search one results:

identity   email                    extensionattribute10   extensionattribute11         first  last
nsurname   name.surname@domain.com  nsurnameT1@domain.com  name.surname@consultant.com  name   surname

Search two will get all tickets created for people leaving the company and returns results like this:

_time                affect_dest  active  description                                   dv_state  number
2024-10-31 09:46:55  STL Leaver   true    Leaver Request for Name Surname - 31/10/2024  active    INC01

So the only way of searching would be to search the second query's description field for where first and last appear.

Expectations:

identity   email                    extensionattribute10   extensionattribute11         first  last     _time                affect_dest  active  description                                   dv_state  number
nsurname   name.surname@domain.com  nsurnameT1@domain.com  name.surname@consultant.com  name   surname  2024-10-31 09:46:55  STL Leaver   true    Leaver Request for Name Surname - 31/10/2024  active    INC01
jdoe       john.doe@domain.com      jdoeT1@domain.com      jdoe@worker.com              john   doe      2024-11-11 12:46:55  STL Leaver   true    John Doe Offboarding on  - 31/12/2024         active    INC02
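One thing to note: a subsearch used as a bare filter (as in the attempt above) only narrows the outer search; it does not carry the ServiceNow fields into the results. One possible sketch, offered with caveats (the rex pattern is a guess based on the sample description, and join has subsearch size limits), extracts first/last from the tickets and joins on those fields:

```
index=db_service_now sourcetype="snow:incident" affect_dest="STL Leaver"
| dedup description
| rex field=description "(?i)for\s+(?<first>\S+)\s+(?<last>\S+)"
| eval first=lower(first), last=lower(last)
| join type=inner first last
    [ search index=collect_identities sourcetype=ldap:query
    | eval first=lower(first), last=lower(last)
    | stats values(email) AS email BY first last identity ]
| table _time identity email first last description dv_state number
```

For larger result sets, appending both searches and correlating with stats by first/last may scale better than join.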
To install an app from a file using the GUI requires signing in to Splunk. No other credentials are required. If you wish to install using the CLI, un-tar the app into /opt/splunk/etc/apps and restart the DS. Either way, you should then copy the created 100_splunkcloud directory to /opt/splunk/etc/deployment-apps. There are no commands to run to complete the installation. The DS does not use port 9997.
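For reference, the CLI steps above might look roughly like this on the deployment server (a sketch; the file name splunkclouduf.spl and the default /opt/splunk install path are assumptions):

```
# Un-tar the credentials app into the apps directory, then restart the DS
tar -xzf splunkclouduf.spl -C /opt/splunk/etc/apps
# Copy the created app where the DS can deploy it to forwarders
cp -r /opt/splunk/etc/apps/100_splunkcloud /opt/splunk/etc/deployment-apps/
/opt/splunk/bin/splunk restart
```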
Hi @Siddharthnegi, this means that you have an issue with the account used for the connection. You probably used a domain user; try using a local user on the DB. Ciao. Giuseppe
Thank you, it's working.
| eval output=if(ABC="NO_Match" OR XYZ="NO_Match", "NO_Match", "Match")
Hi @PickleRick, actually, in phase 1 we are upgrading the RHEL version (6.10 to 9.x; same OS, just a newer version). The RHEL upgrade is planned on a new, separate server built from scratch, and after the RHEL upgrade, in phase 2 we have a Splunk upgrade (8.2 to 9.x). So in phase 1 we are installing the existing Splunk version (8.2) on the new server, and we need to migrate the data from the existing server to the new one. Please could you help with this? Thank you in advance.
Here you will find the prerequisites, hardware requirements and supported operating systems. Universal forwarder deployment prerequisites - Splunk Documentation
Can you please help me to build an eval query?

Condition 1: ABC=Match, XYZ=Match, then the output of ABC compared to XYZ is Match
Condition 2: ABC=Match, XYZ=NO_Match, then the output of ABC compared to XYZ is No_Match
Condition 3: ABC=NO_Match, XYZ=Match, then the output of ABC compared to XYZ is No_Match
Condition 4: ABC=NO_Match, XYZ=NO_Match, then the output of ABC compared to XYZ is No_Match
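Given the four conditions above, the output is Match only when both fields are Match; everything else is No_Match. A minimal sketch using eval case() that encodes that rule directly:

```
| eval output=case(ABC="Match" AND XYZ="Match", "Match", true(), "No_Match")
```

The true() branch acts as the catch-all for the three No_Match conditions.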
Hello, I have a DBConnect query that gets data from a database and then sends it to a Splunk index. Below are the query and how it looks in Splunk. The data is being indexed as key=value pairs with double quotes around the value. I have plenty of other data that does not use DBConnect, and it doesn't have double quotes around the values.

Maybe the quotes are there because I'm using DBConnect? Is it possible to index data from DBConnect without adding the quotes?

When I try to search the data in Splunk I just don't get any data. I think it may have to do with the double quotes? I'm not sure. Here is the search string. The air_temp field is defined in the Climate data model, and the TA (air temperature) in the data is defined in props.conf with the right sourcetype, TU_CLM_Time.

| tstats avg(Climate.air_temp) as air_temp from datamodel="Climate" where sourcetype="TU_CLM_Time" host=TU_CLM_1 by host _time span=60m ```Fetching relevant fields from CLM sourcetype in CLM datamodel.```
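Quoted values normally do not break search: key="value" pairs are extracted at search time with the quotes stripped. A first debugging step (a sketch; the index name is a placeholder) is to confirm the raw events and the extracted field exist before involving the data model at all:

```
index=your_db_index sourcetype="TU_CLM_Time" host=TU_CLM_1
| stats count avg(air_temp)
```

If this raw search returns data but the tstats search does not, the data model's constraints or its acceleration are the more likely culprits than the quotes.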
Hey Splunk team, I'm facing an issue where Splunk fails to search for certain key-value pairs in some events unless I use wildcards (*) in the value. Here's an example to illustrate the problem:

{
  "xxxx_ID": "78901",
  "ERROR": "Apples mangos lemons. Banana blackberry blackcurrant blueberry.",
  "yyyy_NUM": "123456",
  "PROCESS": "orange",
  "timestamp": "yyyy-mm-ddThh:mm:ss"
}

Query examples:

This works (using wildcards):
index="idx_xxxx" *apples mangos lemons*

These don't work:
- index="idx_xxxx" ERROR="Apples mangos lemons. Banana blackberry blackcurrant blueberry."
- index="idx_xxxx" ERROR=*apples mangos lemons*
- The query below, using regex, does not include all ERROR values when trying to find any value after the ERROR key:
  index="idx_xxxx" | rex field=_raw "ERROR:\s*(?<error_detail>.+?)(?=;|$)" | table error_detail

Observations:
- Non-Latin characters are not the issue; in other events, for example, Greek text in the ERROR field is searchable without wildcards.
- This behavior is inconsistent: some events allow exact matches, but others don't.

Questions:
- Could this issue stem from inconsistencies in the field extraction process?
- Are there common pitfalls or misconfigurations during indexing or sourcetype assignment that might cause such behavior?
- How can I debug and verify that the keys and values are properly extracted/indexed?

Any help would be greatly appreciated! Thank you!
"Login failed. The login is from an untrusted domain and cannot be used with Integrated authentication." This error is also shown when I try to save.
Hi @Siddharthnegi, can you execute queries in DB Connect? If yes, check how you configured the Splunk options (index, sourcetype, source, and host). If not, you have to check the Connection, entering the correct connection information. Ciao. Giuseppe
I have an index into which data is coming from DB Connect, but it is showing NO EVENTS along with the error "Invalid database connection", and everything is fine on the database side.
@richgalloway , Thank you for your inputs. When installing the `splunkclouduf` app via the GUI, will it prompt for a username and password during installation, or will it proceed directly without requiring authentication? Since we haven’t previously installed the `splunkclouduf` app through the GUI, I’m curious to know what to expect. If installing by logging into the server directly, where should we place the `splunkclouduf` app—either in the `/opt/splunk/etc/apps` or `/opt/splunk/etc/deployment-apps` directory? After placing it in the appropriate directory, I assume we need to navigate to `/opt/splunk/bin` and execute the necessary command to complete the installation. Please confirm. Also, regarding ports, we know that 8000, 8089, and 9997 need to be open from our on-prem server. If there are any additional ports required, please let me know.  
No. I don't mean searching for the logs from the forwarder. This you won't find, it's obvious. You need to look into _internal log for events from your receiving indexer(s) or HF(s) depending on what your infrastructure looks like concerning that disconnecting forwarder.
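For example, a search along these lines over _internal can surface connection problems on the receiving side (the component names and host filter here are common starting points, not a definitive recipe):

```
index=_internal sourcetype=splunkd host=<your_indexer_or_hf>
  (component=TcpInputProc OR component=TcpOutputProc)
  (log_level=ERROR OR log_level=WARN)
```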
Hi @Vnarunart, yes, you can clone the old HF to a new one but, in addition, remember to also change the hostname in $SPLUNK_HOME/etc/system/local/server.conf and $SPLUNK_HOME/etc/system/local/inputs.conf. Anyway, having a Deployment Server, you could create a new Splunk installation and manage both HFs with the DS, deploying the same apps. Ciao. Giuseppe
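The two settings mentioned above look roughly like this (the hostname is a placeholder):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = new-hf-01

# $SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = new-hf-01
```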
Again - it depends whether by "migrate" you mean just replacing the box and leaving everything as it was before (IP, name, storage layout), or whether you are planning any changes. Do you want to stay with the same underlying OS or do you plan to migrate, for example, from Debian to RHEL? How was your system installed? A dpkg/rpm package? A simple unpack from a tgz? A docker container?
Hi Splunk Community,

I've set up Azure Firewall logging, selecting all firewall logs and archiving them to a storage account (Event Hub was avoided due to cost concerns). The configuration steps taken are as follows:

Log archival: All Azure Firewall logs are set to archive in a storage account.

Microsoft Cloud Add-On: I added the storage account to the Microsoft Cloud Add-On using the secret key with the following permissions:

Input/Action: Azure Storage Table, Azure Storage Blob
API permissions: N/A
Role (IAM): Access key OR Shared Access Signature:
  - Allowed services: Blob, Table
  - Allowed resource types: Service, Container, Object
  - Allowed permissions: Read, List
Default sourcetype(s)/sources: mscs:storage:blob (received this), mscs:storage:blob:json, mscs:storage:blob:xml, mscs:storage:table

We are receiving events from the source files in JSON format, but there are two issues:

1. Field extraction: Critical fields such as protocol, action, source, destination, etc., are not being identified.
2. Incomplete logs: Logs appear truncated, starting with partial data (e.g., "urceID:…" with the leading "Reso" missing), which implies dropped or incomplete events (as far as I understand). Few logs were received compared to the traffic on the Azure Firewall.

Attached is a sample of the logs showing the errors mentioned above.

Environment details:
- Log collector: Heavy Forwarder (HF) hosted in Azure.
- Data flow: Logs are forwarded to Splunk Cloud.

Questions:
1. Could this be an issue with using storage accounts instead of Event Hub?
2. Could the incomplete logs be due to a configuration issue with the Microsoft Cloud Add-On, or possibly related to the data transfer between the storage account and Splunk?
3. Has anyone encountered similar issues with field extraction from Azure Firewall JSON logs?
Ultimate goal: receive Azure Firewall logs with fields extracted like any other firewall logs received via syslog (Fortinet, for example). Any guidance or troubleshooting suggestions would be much appreciated!
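If the truncation happens on the ingest side, one place to look (a sketch; whether these stanzas apply depends on how the add-on defines the sourcetype) is props.conf:

```
# Illustrative only -- verify against the add-on's own props
# On the Heavy Forwarder (ingest-time):
[mscs:storage:blob]
TRUNCATE = 0          # lift the default 10000-char line truncation

# On the search tier (search-time):
[mscs:storage:blob]
KV_MODE = json        # extract fields from the JSON payload
```

Note that events starting mid-word (the "urceID" symptom) point more toward line breaking than truncation, so the LINE_BREAKER / SHOULD_LINEMERGE settings for the blob sourcetype may also need review.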