All Posts

Hi Rich,   I am starting from scratch here and am not a Splunk whisperer, so really starting from ground zero. 
Thank you for the help. This got me to the following: I am hoping to get to the point where the individual fields like "name" and "consumptionCounter" become their own fields so that I can do things like trend over time, average, etc.  
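A minimal search sketch of that end goal, assuming the extracted fields end up named consumptionCounter and name (the exact paths depend on how the JSON is nested, so treat this as a guess; the index and sourcetype are placeholders):

index=my_index sourcetype=my_json_sourcetype
| spath
| timechart span=1h avg(consumptionCounter) by name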
You could change the name of the script so that the browser sees it as a different file.
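A rough sketch of what that rename could look like, assuming a Simple XML dashboard and a hypothetical custom script under the app's appserver/static directory:

<dashboard script="my_viz_v2.js">
  <label>My Dashboard</label>
</dashboard>

Renaming appserver/static/my_viz.js to my_viz_v2.js and pointing the script attribute at the new name makes the browser request a URL it has never cached.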
You can just set up a cluster with SF=RF=1 (mind you, that will not give you any redundancy) and have CM rebalance the buckets. Hidden bonus - you don't have to manually track configs across indexers.
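For reference, a minimal sketch of the server.conf settings involved; the hostnames and key are placeholders, and on Splunk versions before 8.1 the mode values are master/slave and the setting is master_uri rather than manager_uri.

On the Cluster Manager:

[clustering]
mode = manager
replication_factor = 1
search_factor = 1
pass4SymmKey = <secret>

On each peer (indexer):

[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <secret>

If memory serves, bucket rebalancing can then be started from the manager with: splunk rebalance cluster-data -action start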
The first screenshot is about the UF's internal logs in Splunk. The second screenshot is my search string looking for the Windows event log data. I also wrote down my inputs.conf. I do apologize that I have little knowledge about all this. If I need to send more info, or different info, please let me know, thanks!

inputs.conf:

[WinEventLog://Security]
disabled = 0
index = main
sourcetype = WinEventLog:Security
evt_resolve_ad_obj = 1
checkpointInterval = 5

@richgalloway
I am able to parse the timestamp and break lines at "activity_type" using the settings below. However, I am facing a challenge removing the first and last lines, and I am also not able to extract field/value pairs; I used TRANSFORMS but it still didn't work.

First lines:

{ "status": 0, "message": "Request completed successfully", "data": [ {

Last lines:

"count": 33830, "meta_info": { "total_rows": 33830, "row_count": 200, "pagination": { "pagination_id": "" } } }

Current props.conf and transforms.conf:

props.conf:

[sample_test]
BREAK_ONLY_BEFORE = \"activity_type":\s.+,
DATETIME_CONFIG =
LINE_BREAKER = \"activity_type":\s.+,
MAX_TIMESTAMP_LOOKAHEAD = 16
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TIME_PREFIX = event_time
TZ = Europe/Istanbul
category = Custom
disabled = false
pulldown_type = true
TRANSFORMS-extraction = extract_field_value
BREAK_ONLY_BEFORE_DATE =
SHOULD_LINEMERGE = true

transforms.conf:

[extract_field_value]
REGEX = "([^"]+)":\s*"([^"]+)"
FORMAT = $1::$2
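Not a definitive fix, but one avenue to experiment with for the header/footer problem is SEDCMD instead of (or alongside) the transform. A rough, untested sketch for props.conf follows; both regexes are guesses and will likely need tuning to the exact raw events, and note that SEDCMD runs per event after line breaking, so the wrapper text only appears on the first and last events of each response:

[sample_test]
# strip the leading {"status": ... "data": [ wrapper (sketch, untested)
SEDCMD-strip_header = s/^\{[\s\S]*?"data":\s*\[\s*\{?//
# strip the trailing ], "count": ... "meta_info" wrapper (sketch, untested)
SEDCMD-strip_footer = s/\}?\s*\]\s*,\s*"count":[\s\S]*$//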
Are you able to see the UF's internal logs in Splunk? If not, then that problem must be resolved first. Please share the WinEventLog inputs.conf stanza(s). Please also tell us how you are trying to search for the events.
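A couple of checks that might help, assuming the index and sourcetype shown elsewhere in this thread (main and WinEventLog:Security); the host value is a placeholder for the UF's hostname.

Confirm the UF is phoning home:

index=_internal source=*splunkd.log* host=MY-WINDOWS-UF

Then look for the events themselves, over All time in case of timestamp issues:

index=main sourcetype="WinEventLog:Security"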
Hi, I'm quite new to Splunk when it comes to sending data to it. I do have experience with making dashboards etc. I've got a problem receiving data from a Windows PC. I've installed the universal forwarder on there, and I've got another Windows PC that acts as my Enterprise environment. I do know that the forwarder is active and I can see a connection. I want to send WinEventLog data to Splunk. I've made an inputs.conf and an outputs.conf containing the information for what I want to forward, but when I look it up in search I have 0 events. I'm sure I'm doing some things wrong haha. I would like some help with it. Thanks!
Thanks for the reply. But what I want is that whenever I make changes to the script and restart Splunk, they show up on the dashboard, rather than having to clear the cache every time to see the changes. Is it possible to do that?
Hello @Ryan.Paredez It seems that the solution I posted does not apply to most cases. I faced the same issue twice and the solution was to allow some policies on the F5 load balancer related to the CORS error. It worked in one case, but I now have about 3 cases, one of which F5 has not been able to resolve so far. I will update the post once I resolve it. Regards, Khalid
This: <condition match="$row.Services$ == &quot;s3-bucket&quot;"> works fine.
Hello all, I see that SOAR sends an email every time a container re-assignment takes place. I wish to stop SOAR from sending that email, but under Administration -> Email Settings I am only able to change the template of the email. Is there a way to stop it? Thank you in advance.
Hi @lukasmecir , I'm not sure that, after copying only part of the buckets, an index will continue to run correctly: in theory it should, but I'd prefer the approach I described. Also because, in this way, you separate the data so that both Indexers hold more or less the same amount, and then new data will be distributed between them. You could try it, waiting to delete the copied buckets from the first Indexer until after the test is complete. Ciao. Giuseppe
I don't think you can do this at ingest time (but happy to be proved wrong!), but you can parse and split out the elements of the data collection into separate events using spath and mvexpand:

| spath data{} output=data
| mvexpand data
| table data
Hi, thank you for the reply. I understand what you mean, but my intended goal is not only to have the same amount of data on both instances, but to have the same amount of data on both instances per index - to share the load between instances when data in one index is searched. I see I did not mention this in the original post, I am sorry for that, my fault.
Apart from what @gcusello says, rex will only extract contiguous characters into a field, so what you are asking for is not possible in a single rex command.
Hi @lukasmecir , don't copy part of the buckets; instead, divide your indexes between the two Indexers so that each holds more or less the same amount of data, moving entire indexes, not parts of them. Remember to add the same indexes.conf to the new Indexer. Ciao. Giuseppe
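For example, a bare-bones indexes.conf stanza that would need to exist on both Indexers (the index name is hypothetical):

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb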
For good reasons, that is the way browsers work!
Hello everyone, I need your support to parse the sample JSON below. What I want is:
1. Only the fields from "activity_type" to "user_email"
2. Remove the first lines before "activity_type" and the last lines after "user_email"
3. Lines should break at "activity_type"
4. TIME_PREFIX=event_time

I added the settings below, but they don't work for removing the lines or for TIME_PREFIX:

[ sample_json ]
BREAK_ONLY_BEFORE=\"activity_type":\s.+,
CHARSET=UTF-8
SHOULD_LINEMERGE=true
disabled=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX=event_time
SEDCMD-remove=s/^\{/g

Sample data:

{
  "status": 0,
  "message": "Request completed successfully",
  "data": [
    { "activity_type": "login", "associated_items": null, "changed_values": null, "event_time": 1733907370512, "id": "XcDutJMBNXQ_Xwfn2wgV", "ip_address": "x.x.x.x", "is_impersonated_user": false, "item": { "user_email": "xyz@example.com" }, "message": "User xyz@example.com logged in", "object_id": 0, "object_name": "", "object_type": "session", "source": "", "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.1.1 Safari/605.1.15", "user_email": "xyz@example.com" },
    { "activity_type": "export", "associated_items": null, "changed_values": null, "event_time": 1732634960475, "id": "bd0XaZMBH5U9RA7biWrq", "ip_address": "", "is_impersonated_user": false, "item": null, "message": "Incident Detail Report generated successfully", "object_id": 0, "object_name": "", "object_type": "breach incident", "source": "", "user_agent": "", "user_email": "" },
    { "activity_type": "logout", "associated_items": null, "changed_values": null, "event_time": 1732625563087, "id": "jaGHaJMB-qVJqBPy_3IG", "ip_address": "87.200.106.98", "is_impersonated_user": false, "item": { "user_email": "xyz@example.com" }, "message": "User xyz@example.com logged out", "object_id": 0, "object_name": "", "object_type": "session", "source": "", "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36", "user_email": "xyz@example.com" }
  ],
  "count": 33830,
  "meta_info": { "total_rows": 33830, "row_count": 200, "pagination": { "pagination_id": "" } }
}
Hello, I have an all-in-one Splunk instance with data already indexed. Now I want to add a new Indexer (not clustered, clean installation). I would like to move part of the indexed data to the new Indexer (to have approximately the same amount of data on both instances). My idea of the process is:
1. Stop the all-in-one instance
2. Create the new index(es) on the new indexer
3. Stop the new indexer
4. Copy (what is best - rsync?) part of the buckets in the given index(es) from the all-in-one instance to the new indexer
5. Start the new indexer and the all-in-one instance
6. Configure outputs.conf on the forwarders - add the new indexer (see the sketch below)
7. Add the new indexer as a search peer to the all-in-one instance

Would it work, or did I miss something? Thank you for your help. Best regards, Lukas Mecir
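For step 6, a minimal outputs.conf sketch for the forwarders, assuming placeholder hostnames and the default receiving port 9997 (the forwarder then auto load-balances new data across the listed servers):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = allinone.example.com:9997, newindexer.example.com:9997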