All Posts


Hi @karthi2809, Since there is no _raw data after the transaction command, you cannot run free-text searches. You should search against a specific field instead, e.g. | search NOT message="*Failed Processing Concur*"
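The idea of grouping by correlationId and then filtering on a named field (rather than free text) can be sketched outside Splunk. A minimal Python sketch, assuming hypothetical events as dicts with correlationId and message keys:

```python
from collections import defaultdict
from fnmatch import fnmatch

# Hypothetical sample events; in Splunk these would come off the search pipeline.
events = [
    {"correlationId": "c1", "message": "Concur Ondemand Started job A"},
    {"correlationId": "c1", "message": "Failed Processing Concur batch 7"},
    {"correlationId": "c2", "message": "Records Count Validation Passed"},
]

# Rough analogue of `| transaction correlationId`: group events by the field.
transactions = defaultdict(list)
for ev in events:
    transactions[ev["correlationId"]].append(ev)

# Rough analogue of `| search NOT message="*Failed Processing Concur*"`:
# drop any transaction in which some member message matches the wildcard.
kept = {
    cid: evs
    for cid, evs in transactions.items()
    if not any(fnmatch(ev["message"], "*Failed Processing Concur*") for ev in evs)
}

print(sorted(kept))  # ['c2'] — c1's transaction contained a failure message
```

This is only an illustration of the field-based filter; the actual exclusion in Splunk still has to name the field, as in the search above.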
Hi @Mrig342, You can use the eval function below: | eval Used_Space=case(match(Used_Space,"M"),round(tonumber(replace(Used_Space,"M",""))/1024,2)."G",1=1,Used_Space)
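The same case/match logic can be sanity-checked outside Splunk. A minimal Python sketch of the conversion, assuming values shaped like "59M" or "6.4G":

```python
import re

def to_gig(used_space: str) -> str:
    """Convert an 'M' value to 'G' (divide by 1024, round to 2 decimals);
    leave anything else unchanged, mirroring the case() default branch."""
    if re.search("M", used_space):
        return str(round(float(used_space.replace("M", "")) / 1024, 2)) + "G"
    return used_space

print(to_gig("59M"))   # 0.06G
print(to_gig("6.4G"))  # unchanged: 6.4G
```

In SPL the equivalent pieces are match() for the regex test, replace()/tonumber() for the numeric conversion, and the "." operator for string concatenation.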
Thanks in advance. In my scenario I want to group the results using correlationId, so I used the transaction command. The query below checks multiple conditions against the same field, message, and I want to exclude some of those search strings. After the transaction I tried to exclude the search string, but I am not getting any results.
index="mulesoft" applicationName="concur" environment=DEV ("Concur Ondemand Started*") OR (message="Expense Extract Process started for jobName :*") OR ("Before Calling flow archive-Concur*") OR (message="Concur AP/GL File/s Process Status*") OR (message="Records Count Validation Passed*") OR (message="API: START: /v1/expense/extract/ondemand*" OR message="API: START: /v1/fin*") OR (message="Post - Expense Extract processing to Oracle*") | transaction correlationId| search NOT ("*Failed Processing Concur*")| rename content.SourceFileName as SourceFileName content.JobName as JobName content.loggerPayload.archiveFileName AS ArchivedFileName content.payload{} as Response content.Region as Region content.ConcurRunId as ConcurRunId content.HeaderCount as HeaderCount content.SourceFileDTLCount as SourceFileDTLCount content.APRecordsCountStaged as APRecordsCountStaged content.GLRecordsCountStaged as GLRecordsCountStaged | eval "FileName/JobName"= coalesce(SourceFileName,JobName)| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand",like('message',"Expense Extract Process started%"),"Scheduled", true() , "Unknown")| eval Status=case(like('message' ,"%Concur AP/GL File/s Process Status%"),"SUCCESS", like('message',"%EXCEPTION%"),"ERROR") |table correlationId "FileName/JobName" Status ArchivedFileName JobType Response Region ConcurRunId HeaderCount SourceFileDTLCount APRecordsCountStaged GLRecordsCountStaged
Hi All, I have logs like below in Splunk:
Log1: Tue Feb 25 04:00:20 2024 EST 10G 59M 1% /apps
Log2: Tue Feb 25 04:00:20 2024 EST 10G 6.4G 64% /logs
Log3: Tue Feb 25 04:00:20 2024 EST 10G 2G 20% /opt
Log4: Tue Feb 25 04:00:20 2024 EST 30G 282M 1% /var
I have used the below query to extract the required fields:
... | rex field=_raw "EST\s(?P<Total_Space>[^\s]+)\s(?P<Used_Space>[^\s]+)\s(?P<Disk_Usage>[^%]+)\%\s(?P<File_System>[^\s]+)"
Here, the "Used_Space" field contains both GB and MB values, and I need to convert only the MB values to GB. Please help me create a query that converts only the MB values to GB. Your kind inputs are highly appreciated! Thank you!
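As a side note, the rex pattern can be verified against a sample line with Python's re module, which uses the same named-group syntax as Splunk's rex:

```python
import re

# The same pattern used in the rex command above.
pattern = re.compile(
    r"EST\s(?P<Total_Space>[^\s]+)\s(?P<Used_Space>[^\s]+)"
    r"\s(?P<Disk_Usage>[^%]+)%\s(?P<File_System>[^\s]+)"
)

line = "Tue Feb 25 04:00:20 2024 EST 10G 59M 1% /apps"
m = pattern.search(line)
print(m.groupdict())
# {'Total_Space': '10G', 'Used_Space': '59M', 'Disk_Usage': '1', 'File_System': '/apps'}
```

This is just a quick way to iterate on the regex before putting it back into the SPL search.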
I am trying to send logs to Splunk via the API, but it shows "Re-enter client secret" within 15 minutes after I add the tenant. I also installed the add-on on 2 servers, and the feature items shown on them are unequal, as in the picture below.
[Images: feature lists unequal on the 2 servers; "Re-enter client secret" error]
@gcusello  Until 10/30/2023 we received the events using the same approach. I am still using the same configuration settings, but nothing works at all.
The logic worked. Thank you so much!
No, there is no built-in command to find that. You could try to implement it in SPL, and that could be an interesting exercise in itself, but it would most probably _not_ be a very effective solution. If you really need it, you should probably implement it as an external command using Python.
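For reference, legacy Splunk external commands exchange events as CSV over stdin/stdout. A minimal streaming skeleton is sketched below; this is an assumption-laden illustration only (a production command would normally use the splunk-sdk and commands.conf wiring, none of which is shown, and the computed msg_len field is just a placeholder):

```python
import csv
import sys

def stream(infile, outfile):
    """Read events as CSV, add a computed field, write them back as CSV."""
    reader = csv.DictReader(infile)
    fields = (reader.fieldnames or []) + ["msg_len"]
    writer = csv.DictWriter(outfile, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        # Placeholder computation: length of the message field, if present.
        row["msg_len"] = str(len(row.get("message", "")))
        writer.writerow(row)

if __name__ == "__main__":
    stream(sys.stdin, sys.stdout)
```

The real work (whatever "find that" entails) would replace the placeholder computation inside the loop.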
1. If you want to just count, you don't need fields or table in the first place. 2. Your question lacks details: the actual searches run, the results and any warnings/errors you got, and your architecture. 3. Did you check the search logs? 4. How do you know which one is the correct result, and what does "correct" mean in this context?
1. The "restart" part of the command line does not mean that Splunk is restarting; it's just how it was invoked. It's most probably running just fine. 2. As long as you're not running out of memory, there's nothing to fix. Memory is there to be used, not to just lie around.
Hi, I checked the solution. It seems there are still issues with it: it does not show event logs generated 24 hours back, but it does show daily events generated 48 hours back. E.g., if I check it today, it will not show log sources generated on 26 Feb, but it will show log sources generated on 25 Feb.
As @yuanliu already mentioned, we don't know your data (we can guess some parts of it from the field names and our own experience, but that's nowhere near as good as a described sample, anonymized if needed). We also don't know for sure what the search is supposed to do _exactly_; again, we can make some guesses. Anyway, I can still see several things wrong with this search.
First and foremost, the use of the join command. This command has its limits and is best avoided whenever possible; it is good for some specific use cases and for those only. It's especially tricky with bigger datasets because it will silently finalize and return only partial results (if any) if you exceed its limits (run time or result count).
Secondly, you whitelist the domains at the very end of your search. That should be done as early as possible to limit the number of events processed further down the pipeline.
Thirdly, while I think I understand why you do the ut_parse_extended thing, I don't see much point in it.
Fourthly, you're appending a list of previously seen domains, but we have no idea what fields are in that lookup.
And there is so much more going on there... This search also uses a lot of relatively "heavy" commands.
Hi @jwhughes58, sorry, but there's a strange thing in your question: you are speaking of the Bloodhound Enterprise TA, but this isn't a TA; it's an app that should be located on the Search Heads, also because the KV Store is usually disabled on Heavy Forwarders. So you should use the HF for inputs and then use the app on your Search Heads. In other words, the KV Store usually isn't created on an HF, and this feature is usually disabled on HFs. Ciao. Giuseppe
Hi @uagraw01, as @PickleRick said, check whether the user you're using to run Splunk has the permissions to access the shared folder. Then consider using a Universal Forwarder on the server that hosts the shared folder: it is more reliable and efficient. Ciao. Giuseppe
Hi everyone, I would like to restart and apply the rtsearch role to my sc_admin on my free trial, but I cannot submit a ticket with the forms. Do you have any solution for me, please?
@gcusello  I have already tested adding the below string to the monitor stanza, but no luck.
Hi @uagraw01 , please try to use this header in the inputs.conf stanza: [monitor://\\WALVAU-SCADA-1\d$\CM\alarmreports\outgoing\*.xml] Ciao. Giuseppe
There has always been a problem with parsing "headered" structured data. There is even an open idea about it: https://ideas.splunk.com/ideas/EID-I-208 The easiest way to go about it would probably be to parse the header into indexed fields if needed (most of it should already be parsed into _time and host; you might, however, want to store the process name and pid) and then strip the header completely with SEDCMD or INGEST_EVAL (I don't remember whether SEDCMD runs before or after transforms are called). This way you'd be left with an all-JSON event, which Splunk can handle with the proper KV_MODE.
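The strip-the-header idea can be prototyped in Python before committing to props/transforms. A sketch under the assumption of a syslog-like prefix followed by a JSON body (the sample event and regex are hypothetical):

```python
import json
import re

# Hypothetical event: syslog-style header, then a JSON payload.
raw = 'Feb 27 10:15:01 host1 myproc[1234]: {"level": "info", "user": "alice"}'

# Capture process name and pid from the header, then keep only the JSON body,
# roughly what you'd do with indexed fields plus SEDCMD/INGEST_EVAL at index time.
m = re.match(r".*?\s(?P<proc>\S+)\[(?P<pid>\d+)\]:\s(?P<body>\{.*\})$", raw)
print(m.group("proc"), m.group("pid"))  # myproc 1234

event = json.loads(m.group("body"))
print(event["user"])  # alice
```

Once only the JSON body remains, Splunk's automatic KV extraction (KV_MODE=json) can take over.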
1. It's a Linux auditd log, so you might want to use an add-on for auditd; that will make everyone's lives easier. 2. How are you ingesting those logs? Locally, with a forwarder reading from /var/log/audit/audit.log? Sent with syslog over the network? Some other way?
Copy-pasted and missed that closing one. Good catch.