
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @SanjayReddy, Thank you for the suggestion. I already have 6 years of IT experience but really want to work in a Splunk admin role. Would the Power User certification be fine, or should I go for the Splunk Certified Admin? I have already completed a Splunk Power User certification from Udemy. I would appreciate your viewpoint on it. Regards, Kanchan.
10 MB
10 MBs
Hi @gcusello, If I want to use a Heavy Forwarder to forward received syslog logs to a target server that does not have a Splunk instance, can you give me some advice? Thank you!
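Not part of the original question, just a minimal sketch of the usual approach in case it helps frame the answer: a Heavy Forwarder can send data to a non-Splunk syslog receiver through a syslog output group in outputs.conf (the group name, address and port below are placeholders):

# outputs.conf on the Heavy Forwarder
[syslog:external_syslog]
server = 192.0.2.10:514
type = udp

To send only selected sourcetypes to that group rather than everything, props.conf/transforms.conf routing via the _SYSLOG_ROUTING key is the usual companion to this stanza.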
Yes, but a percentage is calculated from a value against some bigger total. What would be the total in your case?
Percentage of bandwidth utilized. I'm thinking (bytes_in + bytes_out)/1024, but I'm stumped from there.
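Purely as an illustration of the question above (every field name and the capacity value here are placeholders, not taken from the actual search): once a total such as the link capacity for the same time span is known, the percentage would look roughly like:

index=network_index sourcetype=bandwidth_logs
| eval total_kb = (bytes_in + bytes_out) / 1024
| eval pct_utilized = round(total_kb / capacity_kb * 100, 2)

Without some capacity_kb-style total there is nothing to divide by, which is the point of the question above.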
There are two ways to go about it. One is the map command as shown by @yuanliu. Another one is using a subsearch. The subsearch has its limitations and can be silently finalized early, producing incomplete results. But the map command is one of the risky commands, and a normal user can be forbidden from running it.
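Not from the original reply, but to make the subsearch variant concrete for this thread's search (same index and sourcetype as above, using the common trick of returning earliest/latest bounds from the subsearch):

index=project1 sourcetype=pc1
    [ search index=project1 sourcetype=pc1 log_data="*error*"
    | eval earliest = _time - 60, latest = _time + 60
    | fields earliest latest ]

Each error becomes an OR-ed (earliest=... latest=...) window in the outer search, so all events in those windows are returned; the subsearch limits mentioned above (result count, runtime) still apply.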
It is rare that I, or anyone here, recommend the map command, but this seems to be an appropriate use case if errors are few and far between.

index=project1 sourcetype=pc1 log_data="*error*"
| eval early = _time - 60, late = _time + 60
| map search="search index=project1 sourcetype=pc1 earliest=$early$ latest=$late$"
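One caveat worth adding to the sketch above (my note, not part of the original reply): map stops after its maxsearches limit, which defaults to 10 subsearches, so if more than a handful of errors match, the limit needs to be raised explicitly, for example:

| map maxsearches=50 search="search index=project1 sourcetype=pc1 earliest=$early$ latest=$late$"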
Yes, I understand. The question is how your data is being ingested. You said that you use a custom app querying an API endpoint. I assume therefore that said app has some modular input which produces data for the forwarder. But that data can be streamed to the Splunk process in three ways. https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/modinputsscript#Stream-event-data If your data is streamed as XML and is being incorrectly (not) split into separate events by the modular input, then your LINE_BREAKER settings don't matter, because XML-mode data bypasses the line-breaking part of the ingestion pipeline completely. Of course this is based on my assumption from what little you wrote about your custom ingestion method.
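To make the XML case concrete (my illustration, not part of the original reply): a modular input using XML streaming mode hands Splunk events that are already delimited by <event> elements, so event boundaries come from the input script, not from LINE_BREAKER. Correct splitting requires the script to emit one <event> per JSON object, roughly like:

<stream>
  <event>
    <data>{"time":"2025-03-25T19:36:35Z", ...first JSON object... }</data>
  </event>
  <event>
    <data>{ ...next JSON object... }</data>
  </event>
</stream>

If instead the script writes the whole API response into a single <event>, it arrives as one big blob, exactly as described elsewhere in this thread.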
I have a stream of logs from a system. To filter for errors, I can perform a search like so:

index=project1 sourcetype=pc1 log_data="*error*"

I can use it to get errors; however, I also want the events surrounding each error. I want to be able to get all events that occurred 1 minute before and 1 minute after (all events, not just errors). What would be the best way to achieve this?
Hi Team, I have tried to integrate with ThousandEyes. From the TE side I confirmed that the connection with Splunk is in a normal status, but from the Splunk perspective I couldn't confirm the chart and real-time information. I would like to know how I can confirm whether the integration is successful or not, and how to see the real-time information from the Splunk side. Thanks.
To create a new endpoint named get_ticket_id in your Django application, follow these steps:

Steps:
1. Define a function in your views to handle the logic of accepting two strings, calling the desired function, and returning the result.
2. Create a URL route to point to the new view function.
3. Implement the logic for the function you want to call.

Example Code: views.py

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
import json

# Sample function to process the two strings
def process_strings(string1, string2):
    # Example logic to generate a ticket ID
    return f"Ticket-{string1[:3]}-{string2[:3]}"

@csrf_exempt
def get_ticket_id(request):
    if request.method == "POST":
        try:
            # Parse the request body
            data = json.loads(request.body)
            string1 = data.get("string1")
            string2 = data.get("string2")
            if not string1 or not string2:
                return JsonResponse({"error": "Both 'string1' and 'string2' are required."}, status=400)
            # Call the processing function
            ticket_id = process_strings(string1, string2)
            return JsonResponse({"ticket_id": ticket_id}, status=200)
        except json.JSONDecodeError:
            return JsonResponse({"error": "Invalid JSON format."}, status=400)
    return JsonResponse({"error": "Only POST requests are allowed."}, status=405)
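Step 2 (the URL route) is not shown in the original answer; a minimal sketch of what it could look like, assuming the view above lives in the app's views.py and the route goes into that app's urls.py (the file layout and path string are my assumptions):

# urls.py in the same Django app
from django.urls import path
from . import views

urlpatterns = [
    path("get_ticket_id/", views.get_ticket_id, name="get_ticket_id"),
]

With that in place, a POST to /get_ticket_id/ with a JSON body containing string1 and string2 should return the ticket_id.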
As mentioned above, the events are coming in as one big blob, not broken into separate events based on the line breaker above.
There were already some valid points made in this thread, but it's worth noting that it's not that simple.
1. Data is rolled over the bucket life cycle, so, depending on your index ingestion rate and settings, your index space or retention period may allow you to hold only some recent portion of data. You might have already discarded older data.
2. Data is kept in buckets, and there is not much sense in trying to go below the "resolution" of a bucket. If a bucket contains data from a month ago until a week ago, it's impossible to tell how much is used by data from some specific three-day-long period within that range.
3. Data in a distributed environment is rolled separately on each indexer, so the "free space" can vary per indexer (a rough illustration follows below).
4. There are a lot of different settings responsible for possible index (parts) sizes.
5. Replication and search factor.
6. SmartStore.
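As a rough illustration of the bucket-resolution and per-indexer points above (my example, not from the original post; the index name and time range are placeholders), dbinspect can show how much disk the buckets overlapping a period occupy on each indexer:

| dbinspect index=your_index
| where endEpoch >= relative_time(now(), "-30d@d")
| stats sum(sizeOnDiskMB) AS disk_mb by splunk_server

This counts whole buckets, so anything finer than bucket boundaries is an estimate at best.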
1. How do you define percentage?
2. You're overthinking your search with this append thing. You should search for both Names and streamstats by Name (a rough sketch follows below). Append uses a subsearch, and it has its limitations, so you might be getting wrong results without even knowing it.
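Purely illustrative (the index, Name values, and aggregated field are placeholders, since the original search isn't visible in this thread), the suggested append-free shape would look roughly like:

index=your_index (Name="NameA" OR Name="NameB")
| streamstats sum(bytes) AS running_total by Name

One base search pulls both Names, and streamstats keeps the running values separate per Name, with no subsearch limits involved.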
Hi @ramuzzini,
How about this for a different approach?

| eval json_data = "{" . replace(_raw, "(?<=,|^)([^=]+)=([^,]+)", "\"\\1\":\"\\2\"") . "}"
| spath input=json_data
| table Loc, Comp, User, Date

Here is a full working example:

| makeresults count=3
| streamstats count
| eval _raw=case(count=1, "Loc=Warehouse, Comp=WH-SOC01, User=username1, Date=2025-03-18", count=2, "Loc=Warehouse, Comp=WH-SOC02, User=username2, Date=2025-03-20", count=3, "Loc=Warehouse, Comp=WH-SOC03, User=username1, Date=2025-03-24")
| fields _raw
| eval json_data = "{" . replace(_raw, "(?<=,|^)([^=]+)=([^,]+)", "\"\\1\":\"\\2\"") . "}"
| spath input=json_data
| table Loc, Comp, User, Date

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.
Regards
Will
If your input sends already-broken events in XML mode, they are not broken again.
Any luck after that with the error?
Hi Community,
I have a JSON data source that I am trying to get into Splunk via a Heavy Forwarder, using a custom-built app that makes an API call. For some reason my LINE_BREAKER seems to be getting ignored; every line ends and starts as follows:

myemail@this-that-theother.co"},{"specialnumber":"number"

The line break is the comma between the close and open curly braces, IOW ,{ and this is the line I am using in my props.conf:

LINE_BREAKER = (\,)\{\"

For some reason the data continues to come in as one big blob of multiple events. This is my props.conf:

KV_MODE = json
SHOULD_LINEMERGE = 0
category = something
pulldown_type = 1
TZ = UTC
TIME_PREFIX=\"time\"\:\"
MAX_TIMESTAMP_LOOKAHEAD = 20
TIME_FORMAT =%Y-%m-%dT%H:%M:SZ
TRUNCATE = 999999
LINE_BREAKER = (\,)\{\"
EVENT_BREAKER_ENABLE = false

Time comes in as such: "time":"2025-03-25T19:36:35Z"

Am I missing something?
This was it. Thank you for the assist.