All Posts

Hmm... This is just a single event? You can't use the starting string to break the events because it appears in the middle of the event as well. So you'd have to go for something like

[jlogs]
#This one assumes that this is _the_ timestamp for the event.
#Otherwise it needs to be changed to match the appropriate part of the event
TIME_PREFIX = Entry\s+\d+\s+starting\sat
#Watch out, this might get messy since you don't have timezone info!
TIME_FORMAT = %d/%m/%Y %H:%M:%S
#This needs to be relatively big (might need tweaking) since the timestamp is
#relatively far down the event's contents
MAX_TIMESTAMP_LOOKAHEAD = 200
#Don't merge lines. It's a performance killer
SHOULD_LINEMERGE = false
#Might need increasing if your events get truncated
TRUNCATE = 10000
NO_BINARY_CHECK = 1
#It's not a well-formed known data format
KV_MODE = none
#We know that each event ends with a line saying "software Completed..."
LINE_BREAKER = (?:[\r\n]+)software\sCompleted\sat\s[^\r\n]+\slocal time([\r\n]+)
#We need the same setting as the non-intuitively named EVENT_BREAKER because you
#want the UFs to split your data into chunks in the proper places
EVENT_BREAKER = (?:[\r\n]+)software\sCompleted\sat\s[^\r\n]+\slocal time([\r\n]+)
EVENT_BREAKER_ENABLE = true

You should put this in props.conf on both your receiving indexer(s)/HF(s) and on your UF ingesting the file.
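If you want to sanity-check the LINE_BREAKER regex outside Splunk first, here is a minimal Python sketch; the sample lines are hypothetical stand-ins for the real log. Splunk treats the text matched by the first capture group as the boundary: everything before it ends one event, everything after it starts the next.

import re

# The LINE_BREAKER regex from the props.conf above.
pattern = re.compile(r"(?:[\r\n]+)software\sCompleted\sat\s[^\r\n]+\slocal time([\r\n]+)")

# Hypothetical stand-in for two consecutive events in the real file.
sample = (
    "Entry 1 starting at 01/05/2024 10:00:00\n"
    "some payload lines\n"
    "software Completed at 01/05/2024 10:00:05 local time\n"
    "Entry 2 starting at 01/05/2024 11:00:00\n"
)

m = pattern.search(sample)
print(sample[:m.start(1)])  # first event, ending with the "software Completed" line
print(sample[m.end(1):])    # beginning of the next event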
So I'm unable to get HEC logs into Splunk Cloud (version 9.1.2312.102). When I test the HECs in Postman (obviously didn't enter my domain or token for privacy reasons) via: POST https://http-inputs-mydomain.splunkcloud.com:443/services/collector/raw with the Authorization header of "Splunk mytoken", it works as expected and I receive a "text":"Success", "code":0 response, which is good. I can also see the event in Splunk when I search for it. I did this individually for each HEC that I've created, and they all work... however, whenever I go to set up the actual HECs via the applications I'm trying to integrate... I get nothing. I'm trying to send logs from Dashlane, FiveTran, Knowbe4, and OneTrust. All of these support native Splunk integrations; I enter the information as requested on their external logging setup and nothing shows in Splunk. I'm not sure what to do here. Any guidance would be awesome! Thanks in advance!
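For reference, a rough Python equivalent of the Postman test described above (the domain, token, and payload are placeholders, matching the redacted values in the post):

import requests

# Placeholders - substitute your own Splunk Cloud domain and HEC token.
url = "https://http-inputs-mydomain.splunkcloud.com:443/services/collector/raw"
headers = {"Authorization": "Splunk mytoken"}

resp = requests.post(url, headers=headers, data="test event", timeout=10)
print(resp.status_code, resp.text)  # expect {"text":"Success","code":0}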
The data comes from either the AD server or the Windows servers by way of the Universal Forwarder; that's the source of the event logs. You have data coming in from the AD server where a UF is installed, and that's how the logs are collected. The logs themselves are configured by your AD admin; sometimes they need to enable further logging for advanced events. Try these first and see if they exist, as they may give you the further info you need. If they don't, it might be worth having a chat with your AD admin to find the exact event ID/log information you need.

Event ID 4771 - Kerberos pre-authentication failed.
Event ID 644 - User account locked out.
Event ID 4625 - An account failed to log on.
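A quick way to check whether those event IDs are reaching Splunk at all (the index and sourcetype below are assumptions; adjust them to your environment):

index=wineventlog sourcetype="WinEventLog:Security" EventCode IN (4771, 644, 4625)
| stats count by EventCode, host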
Hi All, just started a new role and have not been introduced to Splunk in any previous jobs, so this is completely new to me. We have a user that is constantly getting account lockout issues. All our Domain Controller security logs etc. are extracted into Splunk every fifteen minutes. I am attempting to complete a search from the Splunk>enterprise --- New Search field, but I can only extract the below information, which tells me the user, source, and host, and that the user has an Audit Failure. Please could someone point me to how I would go about extracting the information on which machine the user is getting the account lock from. I see quite a few messages on the internet, but they never say where the actual search should be input. Is it directly into the New Search field? Any help would be very much appreciated.
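As a starting point, something like the search below, pasted directly into the New Search field, is a common way to find the source of lockouts; the index and sourcetype are assumptions, and Event ID 4740 ("A user account was locked out") normally carries the originating machine in the Caller Computer Name field:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4740 user="<username>"
| table _time, user, Caller_Computer_Name, host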
I am using these two searches because I want to extract some fields using that regular expression; that is the only reason I am appending. I want help with this so that I don't repeat the search twice, and instead have one query producing a table with the fields total, success, error, correlationId, GCID, etc. Or, if I am using the wrong query, you can suggest how to proceed: I have those logs and have to count them for total, success, and error, and these fields (GCID, correlationId) will be needed, if there is any error, to show the details of that error. Please guide me on how I can proceed. Thanks in advance.
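One common way to avoid running the same search twice is conditional counting with eval inside stats, all in a single pass; the field names and the success/error conditions below are assumptions, since the actual extraction isn't shown:

index=abc sourcetype=abc
| rex field=_raw "<your extraction here>"
| stats count as total, count(eval(status="success")) as success, count(eval(status="error")) as error, values(correlationId) as correlationId, values(GCID) as GCID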
@testingtena first, identify the missing forwarder by using the below query:

index=_internal source="/opt/splunk/var/log/splunk/metrics.log*" sourcetype="splunkd" fwdType="*"
| dedup sourceHost
| rename IPAddress AS hostip, sourceHost AS IPAddress, OS AS fOS
| fields IPAddress, hostname, fGUID, fOS, fwdType

This will list information about connected forwarders based on logs. There could also be an issue with specific configuration files. Here's what to check: serverclass.conf on the deployment server (ensure the server classes actually match your UFs) and deploymentclient.conf on the UFs (verify the stanza pointing at the deployment server is correct).
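For reference, a minimal deploymentclient.conf on a UF looks something like this (the hostname and management port are placeholders):

[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089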
Hi @m92, using the above cron, you run your scheduled search at 12:00 and 19:00. Ciao. Giuseppe
Have a look at this example, it may help; other than that, work through the documentation: splunk-app-examples/custom_alert_actions/slack_alerts/default/data/ui/alerts/slack.html at master · splunk/splunk-app-examples · GitHub
Hello @bofasplunkguy, I am in the same predicament as you. Did you ever find an answer to your problem?
@jaibalaraman , your searches return a consistent set of results regardless of the time zone you are in.
Hi @jaibalaraman, you can specify an exact time such as earliest="10/5/2021:20:00:00", or a relative time such as earliest=-h or latest=@w6. When specifying relative time, you can use the now modifier to refer to the current time.
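For example, to search the last complete hour regardless of when the search runs (the index is a placeholder):

index=main earliest=-h@h latest=@h
| stats count by host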
Hello there, I also want to render a Splunk app's dashboard on my website securely. Is there any way to do that? I have successfully accessed an existing dashboard's XML definition by following this guideline: data/ui/views/{name}. Now my question is how to convert the Splunk app's dashboard XML to HTML so we can show that dashboard on my website. Thanks for your support.
Hi, how does Splunk ES create incidents from notable events? I'm aware that a correlation search in Splunk ES creates a notable event in the "notable" index, but exactly how does it get from there to the "Incident Review" dashboard in Splunk ES? As far as I know the incidents exist in a KV store collection, and I would then assume that there is some scheduled job that takes notable events from the "notable" index and puts them in the KV store collection. The reason I'm asking is that we are missing incidents in our "Incident Review" dashboard, but the corresponding notable events exist in the notable index. So it looks like the "notable event to incident" job has failed somehow. Is this documented somewhere in more detail?
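For troubleshooting, it can help to compare the raw index with what Incident Review actually renders; in a default ES installation, Incident Review is driven by the `notable` macro, which enriches events from the notable index with statuses kept in the KV store. A sketch, assuming default ES macros and field names:

`notable`
| stats count by rule_name, status_label

index=notable
| stats count by search_name

If the second search returns events the first one doesn't, the enrichment step, rather than the correlation search, is the likely culprit.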
It looks like the file has not been indexed for a few days and in addition I found the below warnings: 05-01-2024 02:18:07.646 -0400 WARN TailReader [4549 tailreader0] - Insufficient permissions to read file='/xxx/xxx/xxx/xxx/xxx.csv' (hint: No such file or directory , UID: 0, GID: 0). How can I go about checking the permissions? Thank you.
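A quick way to check from the host itself is to stat the file as the user Splunk runs as; a minimal Python sketch follows (the path is a placeholder for the redacted one in the warning, and UID/GID 0 in the hint suggests splunkd runs as root, so "No such file or directory" often means the file or a parent directory is simply missing at scan time):

import os

# Placeholder - use the real path from the TailReader warning.
path = "/path/to/file.csv"

print("exists:", os.path.exists(path))        # matches the 'No such file' hint
print("readable:", os.access(path, os.R_OK))  # read permission for this process
if os.path.exists(path):
    st = os.stat(path)
    print("mode: %o, uid: %d, gid: %d" % (st.st_mode & 0o777, st.st_uid, st.st_gid))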
#your base search which produces the logs, e.g. index=abc sourcetype=abc

index=firewall sourcetype=abc
| rex field=_raw "is\s(?P<ip>.*)"
| table _raw ip
| stats count by ip

Hi @nsiva .. if this search does not work, pls show us a screenshot.. thanks.
So try this - think of this as version 1.0; it's a bit of trial and error until you get it right. Tip - it's always good practice to place the data into a test index first to get it all working; once it's good, move to a production index by changing the index in inputs.conf after the testing/props dev work.

props.conf

[jlogs]
TIME_PREFIX = Entry\s\d+\sstarting\sat
TIME_FORMAT = %d/%m/%Y %H:%M:%S
BREAK_ONLY_BEFORE = ([\r\n]+)\.net
MUST_BREAK_AFTER = local\stime([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 50
SHOULD_LINEMERGE = true
TRUNCATE = 10000
NO_BINARY_CHECK = 1
KV_MODE = auto
#Remove unwanted headers or data
#This config is no longer needed - left for reference
#TRANSFORMS-null = remove_unwanted_data_from_jlog

See what this looks like.
Hey @nsiva, the query that @inventsekar posted will work with any IP address, provided the raw event looks like: 123 IP Address is 1.2.3.4. Can you please elaborate on why the solution doesn't work for you? For your reference, I've used 4.3.2.1 in _raw and it still extracts the IP address; find the screenshot below. To assist you better, it would be great if you could provide the raw events, and then the ip field can be extracted from them. You can redact the sensitive information. Thanks, Tejas.
I am using the query below:

index="test" sourcetype="reports"
| bin _time span=1m
| stats values(a) as a values(b) as b values(c) as c values(d) as d values(e) as e values(f) as f values(g) as g by par1, _time
| append [search (index="test" sourcetype=reports_metadata) | table par1,par2,par3,par4,par5,par6,par7,par8,par9,par10,par11,par12]
| eventstats values(par2) as par2, values(par3) as par3, values(par4) as par4, values(par5) as par5, values(par6) as par6, values(par7) as par7, values(par8) as par8, values(par9) as par9, values(par10) as par10, values(par11) as par11, values(par12) as par12, values(a) as a values(b) as b values(c) as c values(d) as d values(e) as e values(f) as f values(g) as g by par1
| search par2 IN ("*") par3 IN ("*") par4 IN ("*") par5 IN ("*") par6 IN ("*") par7 IN ("*") par8 IN ("*") par9 IN ("*") par10 IN ("*")
| search par1="*" par2 IN ("*") par3 IN ("*") par4 IN ("*") par5 IN ("*") par6 IN ("*") par7 IN ("*") par8 IN ("*") par9 IN ("*") par10 IN ("*") par11 IN ("*") par12 IN ("*")
| timechart span=15m values(a) by par1 limit=0

In this query, I am able to use any of the values ranging from a to g and plot a time series graph. I need help plotting a time series for one or more of these values, and also with how the value can be picked from a drop-down filter. #timeseries #timechart #xyseries #multiseries #multivalue
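One common approach for the drop-down part is to drive the aggregated field from a dashboard token; a Simple XML sketch follows, where the $metric$ token and its choices are hypothetical:

<input type="dropdown" token="metric">
  <label>Metric</label>
  <choice value="a">a</choice>
  <choice value="b">b</choice>
  <choice value="c">c</choice>
</input>

Then reference the token in the panel's search:

| timechart span=15m values($metric$) by par1 limit=0

For several values at once, a multiselect input using valuePrefix/valueSuffix to wrap each selection in values(...) is a common trick.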
When the retention period is reduced, any buckets where all data is older than the new retention period will be frozen.  When archiving is enabled, freezing buckets means moving them to archive storage.
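For example, in indexes.conf (the index name, period, and path are illustrative; frozenTimePeriodInSecs and coldToFrozenDir are the relevant settings):

[my_index]
# Freeze buckets whose newest event is older than 30 days (in seconds)
frozenTimePeriodInSecs = 2592000
# When set, frozen buckets are copied here instead of being deleted
coldToFrozenDir = /opt/splunk/archive/my_index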
Hello @kkarthik2, please consider migrating from the Splunk App for AWS to the AWS Content Pack along with ITEW/ITSI. ITE Works is a free application that helps integrate and visualize the different AWS app dashboards. As per Splunkbase, the Splunk App for AWS is archived and has reached EOL. The following is the documentation link for migrating from the Splunk App for AWS to the Content Pack for AWS: https://docs.splunk.com/Documentation/CPAWSDash/1.4.0/CP/Migrate Also, the reason you see so many warnings is that the old app references custom JS, which is now unsupported by Splunk, and hence such visualizations will not render on the dashboard. Thanks, Tejas. --- If the above solution helps, an upvote is appreciated.