We are writing log statements in Java, and then reviewing the info and exception alerts. Our team then runs a Splunk search to count log statements by category. Many of our log statements can share multiple categories. We are following this reference for key-value pairs: https://dev.splunk.com/enterprise/docs/developapps/addsupport/logging/loggingbestpractices/ So in our log statements, we are doing LOG.info("CategoryA=true , CategoryG=true"); Of course, we aren't going to write "Category=false" in any logger, since that is implied by its absence. Is this an overall good method to count values by category in Splunk, or do you recommend a better practice?
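Since those log lines already follow the key=value convention, Splunk's automatic search-time field extraction should surface CategoryA, CategoryG, etc. as fields, and one search can count several categories in a single pass. A sketch (the index and sourcetype names are assumptions, not values from the thread):

```
index=app_logs sourcetype=java_app
| stats count(eval(CategoryA="true")) AS CategoryA_count
        count(eval(CategoryG="true")) AS CategoryG_count
```

Omitting "Category=false" works fine with this approach, because eval(CategoryA="true") only counts events where the field is present and true.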
Hi @avii7326, sorry but I don't understand the purpose of this search: you have the same search in the first part, with results aggregated using stats, so in one row you have three values: Total, Success, and Error. Then in the append search, using the same search, you have many events listed with the table command. And there isn't any correlation between the two parts of the search. What output would you expect? Ciao. Giuseppe
Hi @Shubham.Kadam,
I hear you have a call this Friday with AppDynamics. Can you share any learnings from that call here as a reply, as they relate to the question you asked?
Hi @sajo.sam,
Did you see the reply from @Rajesh.Ganapavarapu? Can you confirm whether it helped? If it did, click the "Accept as Solution" button; if not, continue the conversation.
Hello,
I have a problem: I can't see the Windows logs in Splunk Cloud.
My architecture is as follows: UF -> HF -> Splunk Cloud
The logs reach the HF, because I can see them by doing packet inspection with tcpdump. So port 9997 is open, but the logs are not being forwarded to the cloud.
These are my inputs.conf files:
/opt/splunk/etc/apps/Splunk_TA_windows/local/
###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true
[WinEventLog://Security]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=true
[WinEventLog://System]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true
###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
renderXml=true
host=WinEventLogForwardHost
index=mx_windows
/opt/splunk/etc/system/local/inputs.conf
[splunktcp://9997]
index=mx_windows
disabled = 0
[WinEventLog://ForwardedEvents]
index=mx_windows
disabled = 0
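For comparison: data leaves the HF for Splunk Cloud only if an outputs.conf points at the cloud stack; with no (or a broken) tcpout group, events stop at the HF even though port 9997 is listening. In practice this configuration is normally delivered by the Splunk Cloud forwarder credentials app, but a minimal sketch looks like this (the stack name "examplestack" is a placeholder):

```
# outputs.conf on the HF -- a sketch; normally this file is provided
# by the Splunk Cloud universal forwarder credentials app
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.examplestack.splunkcloud.com:9997
useSSL = true
```

Note also that [WinEventLog://...] stanzas are a Windows-only input and only take effect on the Windows UF itself; the copy in the HF's /opt/splunk/etc/system/local/inputs.conf has no effect there.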
Hi @Amadou, as I said, you first have to find the conditions to search for (in other words, the words, strings, or field values to match); then you can use the stats command to count the occurrences grouped, e.g., by host and user. For example, on Windows, if you want an alert when failed logins exceed 5, you could run:
index=wineventlog EventCode=4625
| stats count BY host user
| where count>5
Ciao. Giuseppe
Hi @dataisbeautiful, try to add the eval after the timechart:
index=indx sourcetype=src (Instrument="a" OR Instrument="b")
| timechart values(a) values(b) span=1s
| eval c = a - b
Ciao. Giuseppe
Hi all, I'd like to plot the difference between two values on a timechart.
Example data:
_time   a    b
t       10   1
t+1s    11   1.5
t+2s    12   2
Expected resulting data:
_time   a    b    c
t       10   1    9
t+1s    11   1.5  9.5
t+2s    12   2    10
I'm using the query:
index=indx sourcetype=src (Instrument="a" OR Instrument="b")
| eval c = a - b
| timechart values(a) values(b) values(c) span=1s
Any ideas where I'm going wrong?
I'm using the global time in a dashboard search as suggested above:
"queryParameters": {
    "earliest": "$global_time.earliest$",
    "latest": "$global_time.latest$"
}
It works fine if the user selects presets or relative time. But if the user picks a date range, I get an error like this (screenshot not captured here). Any ideas on how to avoid this date format issue?
How should I refine this query so that I can get every field in one table without using join, append, or any other subsearch?
(index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
| stats count(eval(match(_raw, "Sending POST consents to *"))) as Total,
count(eval(match(_raw, "Create / Update Consents done"))) as Success,
count(eval(match(_raw, "Error in sync-consent-dataFlow:*"))) as Error
| eval ErrorRate = round((Error / Total) * 100, 2)
| table Total, Success, Error, ErrorRate
| append
[ search (index=whcrm OR index=whcrm_int) (sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*"))
| rex field=message ": (?<json>\{[\w\W]*\})$"
| rename properties.correlationId as correlationId
| rename properties.gcid as GCID
| rename properties.gcid as errorcode
| rename properties.entity as entity
| rename properties.country as country
| rename properties.targetSystem as target_system
| table correlationId GCID errorcode entity country target_system
]
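One way to get the totals and the per-event fields in a single table without append, sketched under the assumption that the rex and rename fields behave as in the original (the duplicate rename of properties.gcid to errorcode is left out, since a field can only be renamed once), is to compute the counts with eventstats so every row carries them. Note that match() takes a regex, so the literal strings are used without the * wildcard:

```
(index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api"
    ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
| eventstats count(eval(match(_raw, "Sending POST consents to"))) AS Total,
             count(eval(match(_raw, "Create / Update Consents done"))) AS Success,
             count(eval(match(_raw, "Error in sync-consent-dataFlow:"))) AS Error
| eval ErrorRate = round((Error / Total) * 100, 2)
| rex field=message ": (?<json>\{[\w\W]*\})$"
| rename properties.correlationId AS correlationId, properties.gcid AS GCID,
         properties.entity AS entity, properties.country AS country,
         properties.targetSystem AS target_system
| table correlationId GCID entity country target_system Total Success Error ErrorRate
```

Unlike stats, eventstats adds the aggregate values to every event instead of collapsing them, which is what makes the single-pass table possible.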
I am trying to forward data from a UF to a few indexers, but the indexers have dynamic IPs which keep changing. How does the UF know where to forward the data, and how can I tackle this problem? Also, can someone explain what SmartStore is and how it works?
Hi, it seems that when you are using output_mode=json, those f=xyz field filters don't work. Instead of those you must use jq, as @deepakc already proposed:
curl -ksu $UP 'https://localhost:8089/servicesNS/-/-/admin/macros?count=4&output_mode=json' | jq '.entry[].name'
"3cx_supply_chain_attack_network_indicators_filter"
"7zip_commandline_to_smb_share_path_filter"
"abnormally_high_aws_instances_launched_by_user___mltk_filter"
"abnormally_high_aws_instances_launched_by_user_filter"
You could/should leave a comment on the doc page where output_mode is defined, adding a note that f=xyz doesn't work in JSON mode. The doc team is really helpful about getting that kind of note into the real documentation. r. Ismo
Here is what I found. When using that connection type, we needed to add the option authenticationScheme=NTLM (which enables NTLMv2 authentication); then, in our environment, we made sure SSL is enabled (encrypt=true) and added the option trustServerCertificate=true. After that, the connection could be saved and worked fine.
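Put together, the resulting SQL Server JDBC URL would look something like this (the host, port, database, and domain are placeholders, not values from the thread):

```
jdbc:sqlserver://dbhost.example.com:1433;databaseName=mydb;authenticationScheme=NTLM;domain=EXAMPLE;encrypt=true;trustServerCertificate=true
```

With authenticationScheme=NTLM, the Microsoft JDBC driver also expects the Windows domain, user, and password to be supplied; whether the domain goes in the URL or in separate connection properties depends on how your tool builds the connection.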
Hi, it seems that this feature is documented in the Splunk Enterprise REST API User Manual only; I cannot find that manual for Splunk Cloud. I suppose that the feature is not available to anyone other than users who have the admin role? In SCP that role is restricted to the Splunk Cloud Ops team only, not to any customers. If needed, you can create a support ticket and ask whether this is a valid assumption. r. Ismo
Hi @PickleRick, as I tried the BY clause, it is splitting the results by correlationId. But I want to show the count of the lastRunTime field; if I use lastRunTime directly, the chart shows all the individual counts, and I need to combine all the values into one, shown as "LastRunTimeCount - 79" in the pie chart. Below is the query:
content.lastRunTime="*" content.lastRunTime!="NA"
[search index="Test" applicationName="scheduler" content.lastRunTime="*" content.lastRunTime!="NA" | stats latest(correlationId) as correlationId | table correlationId | format]
| rename content.lastRunTime as LastRunTimeCount
| stats count(LastRunTimeCount) as total by correlationId
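If the goal is a single total rather than one slice per correlationId, dropping the BY clause may be all that is needed. A sketch, assuming content.lastRunTime is already extracted as a field:

```
index="Test" applicationName="scheduler" content.lastRunTime="*" content.lastRunTime!="NA"
| stats count AS LastRunTimeCount
```

A pie chart over a single total is just one slice, so a single-value visualization may fit this result better than a pie chart.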
Hi, this is quite a common question and you can find lots of answers to it with Google/Bing or whatever you want to use. Here are some links for you. There are a lot of options for finding hosts or sources that stop submitting events:
Meta Woot! https://splunkbase.splunk.com/app/2949/
TrackMe https://splunkbase.splunk.com/app/4621/
Broken Hosts App for Splunk https://splunkbase.splunk.com/app/3247/
Alerts for Splunk Admins ("ForwarderLevel" alerts) https://splunkbase.splunk.com/app/3796/
Monitoring Console https://docs.splunk.com/Documentation/Splunk/latest/DMC/Configureforwardermonitoring
Deployment Server https://docs.splunk.com/Documentation/DepMon/latest/DeployDepMon/Troubleshootyourdeployment#Forwarder_warnings
Some helpful posts:
https://lantern.splunk.com/hc/en-us/articles/360048503294-Hosts-logging-data-in-a-certain-timeframe
https://www.duanewaddle.com/proving-a-negative/
r. Ismo