OK. So this is not about the searching itself but rather about the base/post-process search functionality within the dashboard. It's a completely different topic. A base search should be a reporting search and should not return an overly huge number of results; otherwise you might get unpredictable results (and there was definitely something about specifying a list of fields, but I can't recall the details). Anyway, it's usually not good practice to return a raw list of events from the base search and then post-process it with stats as the "refining" search. The approach should be to generate all (possibly relatively detailed) stats in the base search and aggregate them the way you want in the post-process search.
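A minimal sketch of that pattern (index, filter, and field names here are hypothetical, borrowed from the kind of query discussed later in the thread): the base search aggregates away the raw events but keeps every distinct value.

```spl
index=x sourcetype=z filter1=a filter2=b
| stats count BY value, host
```

Each panel can then post-process with, e.g., `| stats dc(value) AS nb_value`; the distinct count stays correct because every distinct value survives the base aggregation, while the raw events do not travel to the panels at all.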
I have a number of log-rotated files for mail.log in the /var/log folder on a Unix system. The /var/log/mail.log file gets ingested just fine, so I know permissions aren't an issue. However, I'd like to also ingest the older data that was log-rotated; for the purpose of ingesting, those files were untarred again, so I have mail.log.1 to mail.log.4. I have tried numerous stanzas and regexes in the whitelist, but none lead to the older data getting ingested. The one I currently have in place is:

[monitor:///var/log/]
index = postfix
sourcetype = postfix_syslog
whitelist = (mail\.log$|mail\.log\.\d+)

Thanks in advance for any suggestions.
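As a quick sanity check outside Splunk, the whitelist regex can be tested against the file paths in question (Splunk applies `whitelist` to the full path). A minimal sketch:

```python
import re

# The whitelist pattern from the monitor stanza, as a Python regex.
whitelist = re.compile(r"(mail\.log$|mail\.log\.\d+)")

paths = [
    "/var/log/mail.log",
    "/var/log/mail.log.1",
    "/var/log/mail.log.4",
    "/var/log/syslog",
]

for path in paths:
    # search() mirrors Splunk's unanchored whitelist matching
    print(path, bool(whitelist.search(path)))
```

Since the pattern does match mail.log.1 through mail.log.4, the whitelist itself is probably not the problem; one thing worth checking is whether Splunk considers those files already indexed (rotated copies with the same initial content can be skipped via the CRC check).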
Hi @uagraw01, If there are too many files in that folder you can try adding the "ignoreOlderThan" setting to the monitor stanza: [monitor://\\WALVAU-SCADA-1\d$\CM\alarmreports\outgoing*]
disabled = false
index = scada
host = WALVAU-SCADA-1
sourcetype = cm_scada_xml
ignoreOlderThan = 24h
Actually, there is _raw after transaction. It's composed of the merged _raw values of the events making up the transaction. But the question is whether there are any events matching this condition. The first thing I'd check would be to search without the "NOT" condition and see if it matches any events at all.
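To illustrate the point (with made-up toy events, not the actual data from the question), here is a rough model of what transaction does to _raw and why a free-text `search NOT` can still work on the result:

```python
# Toy model: a transaction's _raw is the member events' _raw values merged
# together, so free-text matching on the transaction still works.
events = [
    {"correlationId": "c1", "_raw": "Concur Ondemand Started job=42"},
    {"correlationId": "c1", "_raw": "Failed Processing Concur extract"},
    {"correlationId": "c2", "_raw": "Concur Ondemand Started job=43"},
]

# Group events by correlationId, like | transaction correlationId
transactions = {}
for e in events:
    transactions.setdefault(e["correlationId"], []).append(e["_raw"])

merged = {cid: "\n".join(raws) for cid, raws in transactions.items()}

# Equivalent of: | search NOT ("*Failed Processing Concur*")
kept = [cid for cid, raw in merged.items() if "Failed Processing Concur" not in raw]
print(kept)  # c1 is dropped because its merged _raw contains the phrase
```

If no transaction's merged _raw contains the phrase, the NOT filter is a no-op, which is why checking the positive match first is the useful diagnostic.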
Hello @PickleRick Thank you for your feedback. I will try to provide the maximum of details here:
- We have a dashboard using simple searches in single value panels. In every single value panel we have this kind of query: index=x sourcetype=z filter1=a filter2=b | stats dc(value) as nb_value
- To optimize the queries we had to use a base search containing the first part of the query. When called in a single value panel it did not provide any result, so we defined the fields we wanted to extract with the fields command and applied the stats dc right after. We noticed that we had fewer results (we also switched to verbose mode); when we replaced fields with the table command we had the exact number.
PS: we have no errors, we just noticed the big difference in results. We are on Splunk Cloud. Thank you
Hi @karthi2809, Since there is no _raw data after the transaction command you cannot do free-text searches. You should search using a specific field, like: | search NOT message="*Failed Processing Concur*"
Hi @Mrig342, You can use the eval function below:

| eval Used_Space=case(match(Used_Space,"M"),round(tonumber(replace(Used_Space,"M",""))/1024,2)."G",1=1,Used_Space)
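For clarity, the same logic as that eval, mirrored in Python (a sketch for checking the arithmetic, not something Splunk runs):

```python
def mb_to_gb(used_space: str) -> str:
    """Mirror of the SPL eval: convert values like '59M' to GB, leave others as-is."""
    if "M" in used_space:
        # Strip the unit, divide by 1024, round to 2 decimals, re-label as G
        return str(round(float(used_space.replace("M", "")) / 1024, 2)) + "G"
    return used_space

print(mb_to_gb("59M"))   # 0.06G
print(mb_to_gb("282M"))  # 0.28G
print(mb_to_gb("6.4G"))  # unchanged: 6.4G
```

The `1=1` branch in the SPL case() is the catch-all that leaves values already in GB untouched, just like the final `return` here.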
Thanks in advance. In my scenario I want to group the results using correlationId, so I used the transaction command. The query below checks multiple conditions against the same field, message, and I want to exclude some of those search strings. After the transaction I tried to exclude the search string, but I am not getting the result.

index="mulesoft" applicationName="concur" environment=DEV ("Concur Ondemand Started*") OR (message="Expense Extract Process started for jobName :*") OR ("Before Calling flow archive-Concur*") OR (message="Concur AP/GL File/s Process Status*") OR (message="Records Count Validation Passed*") OR (message="API: START: /v1/expense/extract/ondemand*" OR message="API: START: /v1/fin*") OR (message="Post - Expense Extract processing to Oracle*")
| transaction correlationId
| search NOT ("*Failed Processing Concur*")
| rename content.SourceFileName as SourceFileName content.JobName as JobName content.loggerPayload.archiveFileName AS ArchivedFileName content.payload{} as Response content.Region as Region content.ConcurRunId as ConcurRunId content.HeaderCount as HeaderCount content.SourceFileDTLCount as SourceFileDTLCount content.APRecordsCountStaged as APRecordsCountStaged content.GLRecordsCountStaged as GLRecordsCountStaged
| eval "FileName/JobName"=coalesce(SourceFileName,JobName)
| eval JobType=case(like('message',"%Concur Ondemand Started%"),"OnDemand",like('message',"Expense Extract Process started%"),"Scheduled",true(),"Unknown")
| eval Status=case(like('message',"%Concur AP/GL File/s Process Status%"),"SUCCESS",like('message',"%EXCEPTION%"),"ERROR")
| table correlationId "FileName/JobName" Status ArchivedFileName JobType Response Region ConcurRunId HeaderCount SourceFileDTLCount APRecordsCountStaged GLRecordsCountStaged
Hi All, I have logs like below in Splunk:

Log1: Tue Feb 25 04:00:20 2024 EST 10G 59M 1% /apps
Log2: Tue Feb 25 04:00:20 2024 EST 10G 6.4G 64% /logs
Log3: Tue Feb 25 04:00:20 2024 EST 10G 2G 20% /opt
Log4: Tue Feb 25 04:00:20 2024 EST 30G 282M 1% /var

I have used the below query to extract the required fields:

... | rex field=_raw "EST\s(?P<Total_Space>[^\s]+)\s(?P<Used_Space>[^\s]+)\s(?P<Disk_Usage>[^%]+)\%\s(?P<File_System>[^\s]+)"

Here, the output values of the "Used_Space" field include both GB and MB values, and I need to convert only the MB values to GB. Please help with a query that converts only the MB values to GB. Your kind inputs are highly appreciated..!! Thank You..!!
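For reference, the rex pattern can be checked against one of the sample lines with an equivalent Python regex (same named groups, same semantics as the SPL rex):

```python
import re

# Same named-group pattern as the SPL rex, applied after the "EST" token
pattern = re.compile(
    r"EST\s(?P<Total_Space>[^\s]+)\s(?P<Used_Space>[^\s]+)"
    r"\s(?P<Disk_Usage>[^%]+)%\s(?P<File_System>[^\s]+)"
)

raw = "Tue Feb 25 04:00:20 2024 EST 10G 59M 1% /apps"
match = pattern.search(raw)
print(match.groupdict())
# {'Total_Space': '10G', 'Used_Space': '59M', 'Disk_Usage': '1', 'File_System': '/apps'}
```

The extraction itself works for all four sample lines; the remaining task is only the unit conversion on Used_Space.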
I'm trying to send logs to Splunk via the API, but it shows "Re-enter client secret" within 15 minutes after I added the tenant. I also installed the add-on on 2 servers, and the feature items shown are unequal, as in the pictures below. [Screenshots: feature items unequal on the 2 servers; "Re-enter client secret" error]
@gcusello Until 10/30/2023 we received the events using the same approach. I am still using the same configuration settings, but nothing works at all.
No, there is no built-in command to find that. You could try to implement it in SPL, and that could be an interesting exercise in itself, but it would most probably _not_ be a very effective solution. If you really need that, you should probably implement it as an external command using Python.
1. If you want to just count, you don't need to do either fields or table in the first place.
2. Your question lacks details - the actual searches run, the results and possible warnings/errors you got, your architecture.
3. Did you check the search logs?
4. How do you know which one is the correct result, and what does that mean in this context?
1. The "restart" part of the cmdline does not mean that Splunk is restarting. It's just how it was invoked. It's most probably running just fine.
2. As long as you're not running out of memory, there's nothing to fix. Memory is there to be used, not to just lie around.
Hi, I checked the solution. It seems there are still issues with it: it does not show log sources generated 24 hours back, but it does show daily events which were generated 48 hours back. E.g., if I check it today, it will not show log sources generated on 26 Feb, but it will show log sources generated on 25 Feb.
As @yuanliu already mentioned - we don't know your data (we can guess some parts of it from the names of the fields and our own overall experience, but it's nowhere near as good as a described sample (anonymized if needed)). We also don't know for sure what the search is supposed to be doing _exactly_. Again - we can make some guesses. Anyway, I can still see several things wrong with this search.

First and foremost - the use of the join command. This command has its limits and is best avoided whenever possible. It is good for some specific use cases and for them only. It's especially tricky when dealing with bigger datasets because it will silently finalize and return only partial results (if any) if you exceed its limits (run time or result count).

Secondly, you whitelist the domains at the very end of your search. That's something that should be done as early as possible, to limit the number of events processed further down the pipeline.

Thirdly, while I think I can understand why you do the ut_parse_extended thingy, I don't see much point in it.

Fourthly, you're appending a list of previously seen domains, but we have no idea what fields are in that lookup.

And there is so much more going on there... And this search is using a lot of relatively "heavy" commands...
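As a sketch only (the actual search isn't shown in this thread, so the index and field names below are entirely hypothetical), the usual replacement for join is to pull both data sources in one search and aggregate by the common key:

```spl
(index=proxy_logs) OR (index=dns_logs)
| stats values(src_ip) AS src_ip values(action) AS action BY domain
```

Unlike join, stats has no silent result-count or run-time truncation of the subsearch side; and the domain whitelist filter would then go right after the first line, before the stats, so the pipeline processes as few events as possible.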
Hi @jwhughes58, sorry, but there's a strange thing in your question: you are speaking of the Bloodhound Enterprise TA, but this isn't a TA, it's an App that should be located on Search Heads, also because the KV-Store is usually disabled on Heavy Forwarders. So you should use the HF for inputs and then use the App on your Search Heads. In other words, usually the KV-Store isn't created on an HF, and this feature is usually disabled on HFs. Ciao. Giuseppe
Hi @uagraw01 , as @PickleRick said, check if the user you're using to run Splunk has the grants to access the shared folder. Then think about using a Universal Forwarder on the server that has the shared folder: it's more secure and efficient. Ciao. Giuseppe
Hi everyone, I would like to restart and apply the rtsearch role to my sc_admin on my free trial, but I cannot submit a ticket with the forms. Do you have any solution for me, please?