Hello,
I have a problem: I can't see the Windows logs in Splunk Cloud.
My architecture is as follows: UF -> HF -> Splunk Cloud
The logs do reach the HF, because I can see them with packet inspection using tcpdump. So port 9997 is open, but the events are not being forwarded to the cloud.
These are my inputs.conf files:
/opt/splunk/etc/apps/Splunk_TA_windows/local/
###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true
[WinEventLog://Security]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml=true
[WinEventLog://System]
disabled = 0
index=mx_windows
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true
###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
## The addon supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
renderXml=true
host=WinEventLogForwardHost
index=mx_windows
/opt/splunk/etc/system/local/inputs.conf
[splunktcp://9997]
index=mx_windows
disabled = 0
[WinEventLog://ForwardedEvents]
index=mx_windows
disabled = 0
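Since the events reach the HF but never leave it, the HF's forwarding configuration is the other half to check. On a Splunk Cloud stack this is normally provided by the credentials app downloaded from the cloud instance, so the snippet below is only an illustrative sketch of what the effective outputs.conf on the HF looks like; the group name and host are placeholders, not values from this post.

```
# Illustrative only -- usually shipped in the Splunk Cloud credentials app
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.<stack>.splunkcloud.com:9997
useACK = true
```

Running `splunk btool outputs list --debug` on the HF shows which outputs.conf settings actually win, and `splunk list forward-server` shows whether the cloud destination is an active forward target.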
Hi @Amadou, as I said, you have to find the conditions to search (in other words, the words, strings, or field values to search for); then you can use the stats command to count the occurrences grouped by e.g. host and user. For example, on Windows, if you want an alert when failed logins number more than 5, you could run:
index=wineventlog EventCode=4625
| stats count BY host user
| where count>5
Ciao.
Giuseppe
Hi @dataisbeautiful, try adding the eval after the timechart:
index=indx sourcetype=src (Instrument="a" OR Instrument="b")
| timechart values(a) values(b) span=1s
| eval c = a - b
Ciao.
Giuseppe
Hi all, I'd like to plot the difference between two values on a timechart.
Example data:
_time  a   b
t      10  1
t+1s   11  1.5
t+2s   12  2
Expected resulting data:
_time  a   b    c
t      10  1    9
t+1s   11  1.5  9.5
t+2s   12  2    10
I'm using the query:
index=indx sourcetype=src (Instrument="a" OR Instrument="b")
| eval c = a - b
| timechart values(a) values(b) values(c) span=1s
Any ideas where I'm going wrong?
I'm using the global time in a dashboard search as suggested above:
"queryParameters": {
    "earliest": "$global_time.earliest$",
    "latest": "$global_time.latest$"
}
It works fine if the user selects presets or relative time, but if the user picks a date range, I get an error (screenshot not included). Any ideas on how to avoid this date format issue?
How should I refine this query so that I can get all the fields in one table without using join, append, or any other subsearch?
(index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
| stats count(eval(match(_raw, "Sending POST consents to *"))) as Total,
count(eval(match(_raw, "Create / Update Consents done"))) as Success,
count(eval(match(_raw, "Error in sync-consent-dataFlow:*"))) as Error
| eval ErrorRate = round((Error / Total) * 100, 2)
| table Total, Success, Error, ErrorRate
| append
[ search (index=whcrm OR index=whcrm_int) (sourcetype="bmw-sl-gcdm-int-api" ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*"))
| rex field=message ": (?<json>\{[\w\W]*\})$"
| rename properties.correlationId as correlationId
| rename properties.gcid as GCID
| rename properties.gcid as errorcode
| rename properties.entity as entity
| rename properties.country as country
| rename properties.targetSystem as target_system
| table correlationId GCID errorcode entity country target_system
]
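One common way to avoid the append is to extract the per-event fields first and let eventstats attach the overall counts to every row. This is only a sketch built from the field names in the post, and it assumes the goal is one table in which each correlationId row also carries the totals:

```
(index=whcrm OR index=whcrm_int) sourcetype="bmw-sl-gcdm-int-api"
    ("Sending POST consents to *" OR "Create / Update Consents done" OR "Error in sync-consent-dataFlow:*")
| rex field=message ": (?<json>\{[\w\W]*\})$"
| eventstats count(eval(searchmatch("Sending POST consents to"))) as Total,
    count(eval(searchmatch("Create / Update Consents done"))) as Success,
    count(eval(searchmatch("Error in sync-consent-dataFlow:"))) as Error
| eval ErrorRate = round((Error / Total) * 100, 2)
| rename properties.correlationId as correlationId, properties.gcid as GCID,
    properties.entity as entity, properties.country as country,
    properties.targetSystem as target_system
| table correlationId GCID entity country target_system Total Success Error ErrorRate
```

Because eventstats keeps the original events, the per-correlationId fields and the aggregate counts end up in the same result set without any subsearch.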
I am trying to forward data from a UF to a few indexers, but the indexers have dynamic IPs which keep changing. How does the UF know where to forward the data, and how can I tackle this problem? Also, can someone explain what SmartStore is and how it works?
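The usual ways around changing indexer IPs are to point outputs.conf at stable DNS names instead of raw IPs, or, if the indexers are clustered, to use indexer discovery so the forwarder asks the cluster manager for the current peer list. A sketch with placeholder hostnames and key:

```
# outputs.conf on the UF -- hostnames and key are placeholders

# Option 1: stable DNS names instead of IPs
[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# Option 2: indexer discovery via the cluster manager
[indexer_discovery:cluster1]
master_uri = https://cm.example.com:8089
pass4SymmKey = <key>

[tcpout:discovered_indexers]
indexerDiscovery = cluster1
```

As for the second question: SmartStore moves warm bucket storage from local indexer disk to remote object storage (for example S3), with the indexers keeping a local cache of recently used buckets.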
Hi, it seems that when you are using output_mode=json, those f=xyz field filters don't work. Instead you must use jq, as @deepakc already proposed.
curl -ksu $UP 'https://localhost:8089/servicesNS/-/-/admin/macros?count=4&output_mode=json' | jq '.entry[].name'
"3cx_supply_chain_attack_network_indicators_filter"
"7zip_commandline_to_smb_share_path_filter"
"abnormally_high_aws_instances_launched_by_user___mltk_filter"
"abnormally_high_aws_instances_launched_by_user_filter"
You could/should leave a comment on the doc page where output_mode is defined, noting that f=xyz doesn't work in JSON mode. The doc team is really helpful about getting that kind of note into the real documentation.
r. Ismo
Here is what I found. When using that connection type, we needed to add the option authenticationScheme=NTLM (which enables NTLMv2 authentication); then, in our environment, we made sure SSL is enabled (encrypt=true) and added the option trustServerCertificate=true. After that, the connection could be saved and worked fine.
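For reference, these options are connection-string properties of the Microsoft JDBC driver for SQL Server; an illustrative URL with placeholder host, database, and domain values might look like:

```
jdbc:sqlserver://dbhost.example.com:1433;databaseName=mydb;integratedSecurity=true;authenticationScheme=NTLM;domain=EXAMPLE;encrypt=true;trustServerCertificate=true
```

Note that trustServerCertificate=true skips certificate validation, so it is a workaround for environments where the SQL Server certificate isn't trusted by the client rather than a hardened setup.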
Hi, it seems that this feature is documented in the Splunk Enterprise REST API User Manual only; I cannot find that manual for Splunk Cloud. I suppose that this feature is not available to anyone other than users who have the admin role? In SCP that role is restricted to the Splunk Cloud Ops team only, not to customers. If needed, you can create a support ticket and ask whether this is a valid assumption. r. Ismo
Hi @PickleRick, I tried the BY condition, and it shows the results split by correlationId. But I want to show the count of the lastRunTime field; if I use lastRunTime directly, the chart shows all the counts separately, whereas I need to club all the values into one. Below is the query; I need the pie chart to show a single value like LastRunTimeCount - 79. content.lastRunTime="*" content.lastRunTime!="NA"
[search index="Test" applicationName="scheduler" content.lastRunTime="*" content.lastRunTime!="NA"
| stats latest(correlationId) as correlationId
| table correlationId
| format]
| rename content.lastRunTime as LastRunTimeCount
| stats count(LastRunTimeCount) as total by correlationId
Hi, this is quite a common question, and you can find lots of answers to it with Google/Bing or whatever you want to use. Here are some links for you. There are a lot of options for finding hosts or sources that stop submitting events:
Meta Woot! https://splunkbase.splunk.com/app/2949/
TrackMe https://splunkbase.splunk.com/app/4621/
Broken Hosts App for Splunk https://splunkbase.splunk.com/app/3247/
Alerts for Splunk Admins ("ForwarderLevel" alerts) https://splunkbase.splunk.com/app/3796/
Monitoring Console https://docs.splunk.com/Documentation/Splunk/latest/DMC/Configureforwardermonitoring
Deployment Server https://docs.splunk.com/Documentation/DepMon/latest/DeployDepMon/Troubleshootyourdeployment#Forwarder_warnings
Some helpful posts:
https://lantern.splunk.com/hc/en-us/articles/360048503294-Hosts-logging-data-in-a-certain-timeframe
https://www.duanewaddle.com/proving-a-negative/
r. Ismo
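Alongside those apps, a quick ad-hoc check can be done with plain SPL; a minimal sketch (the index name and 24-hour threshold are placeholders):

```
| metadata type=hosts index=my_index
| eval hoursSinceLastEvent = round((now() - recentTime) / 3600, 1)
| where hoursSinceLastEvent > 24
| convert ctime(recentTime) as lastSeen
| table host lastSeen hoursSinceLastEvent
```

The metadata command returns recentTime per host, so this lists hosts in the index that have gone quiet for more than a day.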
Thanks for your help. Combining the data sets using "| stats values(*) as * by Account_Name", I was able to get what I was looking for:
(index="wineventlog" AND sourcetype="wineventlog" AND EventCode=4740) OR
(index="activedirectory" AND sourcetype="ActiveDirectory" AND sAMAccountName=* AND OU="Test Users")
| eval Account_Name = lower( coalesce( Account_Name, sAMAccountName))
| search Account_Name=*
| stats values(*) as * by Account_Name
| where EventCode=4740 AND OU="Test Users"
| fields Account_Name EventCode OU
I'm working on a Splunk data feed outage alert: "The following data feed has been detected down: Index=a sourcetype=splunkd host=b." Can someone point me in the right direction for troubleshooting this issue? Thanks a lot.
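An alert like this is usually driven by a search that compares the last time the feed was seen against a threshold, so reproducing it manually is a good first troubleshooting step. A minimal sketch using the index/sourcetype/host from the alert text (the 60-minute threshold is a placeholder):

```
| tstats latest(_time) as lastSeen where index=a sourcetype=splunkd host=b
| eval minutesSinceLastEvent = round((now() - lastSeen) / 60, 0)
| where minutesSinceLastEvent > 60
```

If this fires, work down the pipeline: is the forwarder service on host b running, is it connecting to the indexers (check its index=_internal events), and are any queues blocked? Since sourcetype=splunkd is the forwarder's own internal log, this feed going quiet typically means the host has stopped forwarding entirely.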
If you haven't implemented reading and queuing HEC acks, then it cannot work; without that implementation, you will definitely lose some events. Also, even if you have implemented it with an LB deployed, you will probably get some duplicate events, because it's not 100% certain that you will check the ack from the same individual HF/HEC endpoint to which you sent the original event. I'm not sure whether HEC ack also brings HF-level ack into use; personally, I would enable it manually. As I said, if I use HEC ack, I also enable ack (useACK in outputs.conf) on the whole path from the HEC node to all indexers. If your HF crashes before the HEC client has read the ack, your client should send those events again, and you will get duplicates. The same applies if you have many HFs behind an LB and sticky sessions don't work, or an HF crashes/stops serving. You should implement your HEC client with a timeout to prevent it from waiting forever; once the timeout is reached, it sends the event again. There will be situations where you never get the ack for an individual event!
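To make the client-side contract concrete, with acknowledgement enabled the HEC exchange looks roughly like this (these are the standard HEC endpoints; the channel GUID and ackId values are illustrative):

```
# Send an event on a channel; the response returns an ackId
POST /services/collector/event?channel=<guid>
  {"event": "hello"}
  -> {"text":"Success","code":0,"ackId":0}

# Poll for the ack; resend the event if it never turns true within your timeout
POST /services/collector/ack?channel=<guid>
  {"acks":[0]}
  -> {"acks":{"0":true}}
```

The client has to persist each event until its ackId comes back true, which is exactly the "reading and queuing" implementation described above.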
Okay, good point, I must have left my brain somewhere far away... Indeed, max(bytes) is 47KB and avg is 2KB, less than 1MB! Thank you all for your responsiveness.
Agent-based: use the Splunk OpenTelemetry Collector ( link ) or the Splunk Universal Forwarder ( link ). Agent-less: use the Splunk Add-on for AWS ( link ); it calls the AWS REST API.