All Posts

Hello Zubair, I tested this on the sample data you posted and it seems to work. Give it a shot and tell me if it works for you:

[json_test]
SHOULD_LINEMERGE=false
LINE_BREAKER=([,\r\n]+){
CHARSET=AUTO
TIME_PREFIX="event_time"\:\s
MAX_TIMESTAMP_LOOKAHEAD=13
SEDCMD-removestart=s/^{[\s\S]*?\s*\[//
SEDCMD-removeend=s/],\r\n"count[\s\S]*\r\n}//
KV_MODE=json
@richgalloway Thanks for your reply, unfortunately I still have no luck. By the looks of it, I'm not receiving any sourcetypes in Splunk. I saw my typo later but still wasn't able to receive any kind of data regarding Windows event logging. Any other suggestions as to what the issue could be?
This got me on the right track and led me to the following:
Thanks for your response @isoutamo and @PickleRick, and I totally agree, there is more to a Splunk deployment than just the initial configuration. This is for a small lab (10-15 UFs) and I can't afford to hire help. For now, I want to compile a list of steps one should follow to have an initial configuration ready. BTW, I read somewhere that FIPS for Splunk is only supported on Linux systems and not on Windows, is that correct?
I mean the default value option is literally right at the bottom of the image you posted. That is how you set the default value of that token before any event can manipulate the expected outcome value. I'm hoping you are actually experiencing something more complicated and that maybe I don't fully understand your use case yet. But really, any other outcome means the value is conditionally set due to some other event occurring, so I don't know how to advise.
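For reference, in Simple XML the same default can be expressed directly in the dashboard source; a minimal sketch (the token name and values below are made up for illustration, not taken from the dashboard in question):

<input type="dropdown" token="outcome_tok">
  <label>Outcome</label>
  <choice value="pass">Pass</choice>
  <choice value="fail">Fail</choice>
  <default>pass</default>
</input>

<!-- or, for a token no input owns, seed it when the form loads -->
<init>
  <set token="outcome_tok">pass</set>
</init>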
It looks like there's a typo in the hostname in the query.  Try host=*.

You can confirm a sourcetype was received using this search:

index=_internal component=Metrics group=per_sourcetype_thruput series="WinEventLog:Security"

Just change the 'series' value to the sourcetype you're looking for.
Firstly, if this "works", it must be by mistake. LINE_BREAKER must contain a capturing group to find the breaker.

Secondly, don't use SHOULD_LINEMERGE=true unless you know exactly what you're doing and why.

Thirdly, TIME_PREFIX should match the prefix as closely as possible so Splunk doesn't have to guess.

Fourthly, TRANSFORMS defines index-time extractions.

You could try to approach it with a line breaker similar to yours and then trim it with SEDCMD, but it is a bad idea as a whole. Don't process structured data this way. Are you absolutely sure that your JSON structures will _always_ be rendered starting with this field? And that they will always end with that other field? If so, then why are you using structured data? Process your data with an external tool before ingesting it and split it properly using JSON-based logic, not plain regexes.
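A minimal pre-processing sketch of that last suggestion, assuming Python is available; this is not the poster's method, and the file-path argument and the "data" key are assumptions based on the sample in the question:

# flatten_response.py - emit one JSON object per line so Splunk can break events on newlines
import json
import sys

with open(sys.argv[1]) as f:             # path to the raw API response file (assumption)
    payload = json.load(f)

for record in payload.get("data", []):   # "data" is the array shown in the question
    print(json.dumps(record))            # one compact JSON event per line

With one object per line, a plain newline LINE_BREAKER plus KV_MODE=json (or INDEXED_EXTRACTIONS=json) should be enough, with no SEDCMD surgery on the wrapper object.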
Hi Rich,   I am starting from scratch here and am not a Splunk whisperer, so really starting from ground zero. 
Thank you for the help. This got me to the following: I am hoping to get to the point where individual values like "name" and "consumptionCounter" become their own fields so that I can do things like trend over time, average, etc.
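Once the events are ingested as clean JSON, a search along these lines is one way to get there (the index and sourcetype are placeholders; the field names come from the post above):

index=main sourcetype=<your_sourcetype>
| spath
| timechart span=1h avg(consumptionCounter) by name

spath pulls the JSON fields out at search time (it isn't needed if KV_MODE=json is already set on the sourcetype), and timechart then trends the average consumptionCounter per name.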
You could change the name of the script so that the browser sees it as a different file.
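For example, if the file lives in the app's appserver/static folder, renaming it and updating the dashboard root node is enough to defeat the browser cache; a sketch with made-up file names:

<!-- before: <form version="1.1" script="my_viz.js"> -->
<form version="1.1" script="my_viz_v2.js">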
You can just set up a cluster with SF=RF=1 (mind you, that will not give you any redundancy) and have CM rebalance the buckets. Hidden bonus - you don't have to manually track configs across indexers.
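A minimal sketch of that setup in server.conf (attribute names as in recent Splunk versions; older releases use master/master_uri instead of manager/manager_uri, and the URI and secret below are placeholders):

# On the cluster manager
[clustering]
mode = manager
replication_factor = 1
search_factor = 1
pass4SymmKey = <shared_secret>

# On each indexer peer
[clustering]
mode = peer
manager_uri = https://<cluster_manager_host>:8089
pass4SymmKey = <shared_secret>

The rebalance itself can then be started from the manager, for example with: splunk rebalance cluster-data -action start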
The first screenshot is about the UF's internal logs in Splunk. The second screenshot is my search string looking for winevent. I also wrote down my inputs.conf. I do apologize that I have little knowledge about all this. If I need to send more info or the right info, please let me know, thanks! @richgalloway

inputs.conf:

[WinEventLog://Security]
disabled = 0
index = main
sourcetype = WinEventLog:Security
evt_resolve_ad_obj = 1
checkpointInterval = 5
I am able to parse the timestamp and line-break at "activity_type" using the settings below. However, I am facing a challenge in removing the first lines and last lines, and I am also not able to extract field/value pairs; I used TRANSFORMS but it still didn't work.

First lines:
{
"status": 0,
"message": "Request completed successfully",
"data": [
{

Last lines:
"count": 33830,
"meta_info": {
"total_rows": 33830,
"row_count": 200,
"pagination": {
"pagination_id": ""
}
}
}

Current props.conf and transforms.conf:

props.conf
[sample_test]
BREAK_ONLY_BEFORE = \"activity_type":\s.+,
DATETIME_CONFIG =
LINE_BREAKER = \"activity_type":\s.+,
MAX_TIMESTAMP_LOOKAHEAD = 16
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TIME_PREFIX = event_time
TZ = Europe/Istanbul
category = Custom
disabled = false
pulldown_type = true
TRANSFORMS-extraction = extract_field_value
BREAK_ONLY_BEFORE_DATE =
SHOULD_LINEMERGE = true

transforms.conf
[extract_field_value]
REGEX = "([^"]+)":\s*"([^"]+)"
FORMAT = $1::$2
Are you able to see the UF's internal logs in Splunk?  If not, then that problem must be resolved first. Please share the WinEventLog inputs.conf stanza(s). Please also tell how you are trying to search for the events.
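For that first check, a search along these lines should do (the host value is a placeholder for the forwarder's hostname):

index=_internal host=<UF_hostname> sourcetype=splunkd
| stats count by source

If the UF's splunkd.log events show up here, the forwarding path itself works and the problem is on the input side; if nothing comes back, fix connectivity/outputs first.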
Hi, I'm quite new to Splunk when it comes to sending data to it. I do have experience with making dashboards etc. I've got a problem receiving data from a Windows PC. I've installed the universal forwarder on there and I've got another Windows PC that acts as my enterprise environment. I do know that the forwarder is active and I can see a connection. I want to send WinEventLog data to Splunk. I've made an input.conf and output.conf containing information for what I want to forward. But when I look it up in search I have 0 events. I'm sure I'm doing some things wrong haha. I would like some help with it. Thanks!
Thanks for the reply. But I want it so that whenever I make changes to the script and restart Splunk, they show on the dashboard, rather than having to clear the cache every time to see the changes. Is it possible to do that?
Hello @Ryan.Paredez

It seems that the solution I posted does not apply to most cases. I faced the same issue twice and the solution was to allow some policies on the F5 load balancer related to the CORS error. It worked in one case, but I now have about 3 cases, and in one of them F5 has not been able to resolve it so far. I will update the post once I resolve it.

Regards,
Khalid
This : <condition match="$row.Services$ == &quot;s3-bucket&quot;"> works fine
Hello all, I see that SOAR sends an email every time a container re-assignment takes place. I wish to stop SOAR from sending that email, but under Administration -> Email Settings I only manage to change the template of the email. Is there a way to stop it? Thank you in advance.
Hi @lukasmecir,

I'm not sure that, after copying only a part of the buckets, the index will continue to run correctly: in theory it should, but I'd prefer the approach I described. Also because, in this way, you separate the data so that both indexers hold more or less the same amount of data, and then the new data will be distributed between them.

You could try it, waiting to delete the copied buckets from the first indexer until after the test is complete.

Ciao.
Giuseppe