All Posts
Hi @livehybrid  The windbag command worked just fine, but the collect command did not work. How do I use the collect command in a Splunk report that appends "| summaryindex" automatically? Perhaps the screenshot below will explain better. Thank you for your help.

I have a Splunk report that generates a summary index daily. The search query will be:

index=summary report=json_test

When the report runs daily, the search is appended with the "| summaryindex" command below:

| windbag | head 1 | eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}" | summaryindex spool=t uselb=t addtime=t index="summary" file="RMD[random characters].stash_new" name="json_test" marker="hostname=\"https://aa.test.com/\",report=\"json_test\""
Hi @LearningGuy  Yes, you can use output_format=hec - see below:

| windbag | head 1 | eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}" | eval source="answersDemo" | collect index=main output_format=hec

Then search index=main source=answersDemo to see the collected event.

Note: you need to ensure you have the run_collect capability for your role, and also access to the index you are collecting into.
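A quick way to check is a sketch like the one below, assuming your role can query the REST API (replace "your_role" with your actual role name):

| rest /services/authorization/roles splunk_server=local
| search title="your_role"
| table title capabilities imported_capabilities

run_collect should appear in capabilities or imported_capabilities if your role has it.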
Hi @LearningGuy  Ah yes, you do need access to the index you search, but it can be any index. You might actually be able to use the "windbag" command instead, like this:

| windbag | head 1 | eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"
Hi @ITWhisperer  Will a JSON format with a tree structure be supported if I create a summary index using a Splunk report? The Splunk report automatically generates the summary index using the "summaryindex" command rather than the "collect" command. According to the documentation you sent, output_format=hec is what produces JSON-formatted output. Thank you
Not being admin, you might not have access to _internal, which is why you get no events whose _raw field you can override. So, yes, try using one of the indexes you do have access to (with a corresponding timeframe so that you find at least 1 event). Assuming you have access/permissions, you can add to a summary index with the collect command. https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Collect
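For example, a minimal sketch, assuming a summary index named "summary" exists and your role has the run_collect capability ("your_index" and the source name are placeholders):

index=your_index | head 100 | stats count by sourcetype | collect index=summary source="collect_demo"

Afterwards, index=summary source="collect_demo" should return the summarised results.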
I've done this little app in order to address this specific use case: https://github.com/kilanmundera/Custom-Annotations-Framework-for-Splunk-Enterprise-Security
Hello @livehybrid  If I literally used your query, I got no result, but if I changed the index name to one of my existing indexes, I got the same output.

1. Should I use one of my existing indexes for testing? (As I am not an admin, I don't have the ability to import JSON and create an index.)
2. How do I create a summary index in JSON format with a tree structure?

Thank you so much for your help
Hi all, I'm planning to deploy the Splunk Attack Range in a cloud-based lab environment, likely in AWS or Azure. I need to provide my team with clear guidance on the resource requirements for provisioning multiple virtual machines or instances as part of the full deployment.

From the documentation, I see the Attack Range includes: Splunk Enterprise Server, Splunk SOAR, Windows Domain Controller, Windows Server, Windows Workstation, Kali Linux, Nginx server, a general-purpose Linux server, Zeek server, and Snort server (IDS).

I'm looking for recommendations on the following:

Compute - vCPU and RAM requirements for each component when deployed on separate VMs. What instance types have worked well in AWS or Azure?
Storage - Minimum and recommended disk space per instance. Are SSD-backed volumes necessary for performance? What IOPS or throughput is required for log-heavy components like Splunk or Zeek?
Deployment tips - Has anyone successfully deployed this in AWS or Azure? Any suggestions on instance sizing, storage configuration, or common bottlenecks when running all components concurrently?

Appreciate any best practices or real-world guidance you can share to help with efficient provisioning. Thanks in advance!
Does Splunk integrate with WebEx Calling (not WebEx Meetings or WebEx Contact Center) for CDR reporting, similar to how it integrates with CUCM?
Hi all, I'm trying to dynamically replace single backslashes with double backslashes in a search string and use the result to search across a field (e.g., FileSource). Here's what I've tried:

| eval text_search="*\\Test\abc\test\abc\xxx\OUT\*"
| eval text_search_escaped=replace(text_search, "\\\\", "\\\\\\\\")
| search FileSource=text_search_escaped

The output of text_search_escaped looks correct (with double backslashes), and if I run a manual search like the one below, I do get results:

index=... FileSource="*\\Test\\abc\\test\\abc\\xxx\\OUT\\*"

However, when I try to use the text_search_escaped variable inside search, I get no results. Am I missing something in how Splunk treats dynamic fields inside search? Is there a better way to pass an escaped Windows-style path to a search clause?
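For reference, a minimal reproduction with a hypothetical FileSource value; the where/like variant does match, which makes me think search treats the right-hand side as a literal string rather than the field's value:

| makeresults
| eval FileSource="D:\\Test\\abc\\test\\abc\\xxx\\OUT\\file.txt"
``` returns nothing: search compares FileSource to the literal string "text_search_escaped" ```
| eval text_search_escaped="*\\Test\\abc\\test\\abc\\xxx\\OUT\\*"
| search FileSource=text_search_escaped

| makeresults
| eval FileSource="D:\\Test\\abc\\test\\abc\\xxx\\OUT\\file.txt"
``` returns the event: like() compares against the field's value, with % as the wildcard ```
| where like(FileSource, "%\\Test\\abc\\test\\abc\\xxx\\OUT\\%")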
@jkat54 Hello, I found that I get the lastTime metadata from tstats when the query is run via the export API endpoint from the CLI, but not when I run the same query in the web search, even though the lastTime value is from last year, from an offline UF. I guess there may be some filtering in the web UI. This only applies to a single result, though. Results can also differ, I guess, due to a different user/role, app context, or API endpoint, which may be my case. Thanks.
In Splunk, _raw is only one line, but it can contain e.g. a \n character. You can see it with e.g. "| table _raw".
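A minimal sketch of such an event; urldecode("%0A") produces a newline character, since eval string literals do not interpret \n:

| makeresults
| eval _raw="first line" . urldecode("%0A") . "second line"
| table _raw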
Hi @LearningGuy  When using makeresults, which is a report-generating command, you get a table output. When you want to get a JSON tree view, you need an event-based output, so I use this little trick to get an event and then override it with eval _raw, like this:

index=_internal | head 1 | eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"
Hello, How do I create sample JSON data and display it in a tree structure? I used makeresults to create the sample JSON data below:

| makeresults | eval data = "{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"

The search result is below. My expected output is below. I have the option to select "list" from the drop-down, but this option is only available if I import the data into an index. Please help. Thanks

JSON data:
{
  "name": "John Doe",
  "age": 30,
  "address": {
    "street": "123 Main St",
    "city": "Anytown",
    "state": "CA",
    "zip": "12345"
  },
  "interests": [
    "reading",
    "hiking",
    "coding"
  ]
}
Thank you, @livehybrid, @richgalloway. I'll get screenshots, but a related question: how do I access the second line of _raw?
This seems to be working great. Over 24 hours I get a few quirks, but I can live with it. Thank you.
Hi @ranandeshi  I've posted an updated SPL directly on the question, but you can make this a single eval with:

| eval formatted_time = strftime((tonumber(substr(identifier,2,16),16) - tonumber("4000000000000000",16) + tonumber(substr(identifier,18,8),16) / 1000000000), "%Y-%m-%d %H:%M:%S") . "." . printf("%03d", round(tonumber(substr(identifier,18,8),16) / 1000000, 0))

This means you could possibly use INGEST_EVAL to overwrite the _time field (note that _time needs the numeric epoch value, not the formatted string):

== props.conf ==
[yourSourcetype]
TRANSFORMS-taiTime = taiTimeExtract

== transforms.conf ==
[taiTimeExtract]
INGEST_EVAL = _time:=(tonumber(substr(identifier,2,16),16) - tonumber("4000000000000000",16)) + tonumber(substr(identifier,18,8),16) / 1000000000

However, this assumes "identifier" is a field it can eval against. You might need to extract this first, as in the sketch below. Do you have a sample event I can work on to help, or is this enough to get you started?
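For the extraction step, a minimal sketch, assuming (hypothetically) that each raw event starts with the 25-character TAI64N token ("@" followed by 24 hex digits):

== transforms.conf ==
[taiTimeExtract]
# Pull the TAI64N token from the start of _raw into "identifier" (assumes it leads the event),
# then convert it to epoch seconds for _time.
INGEST_EVAL = identifier:=substr(_raw,1,25), _time:=(tonumber(substr(identifier,2,16),16) - tonumber("4000000000000000",16)) + tonumber(substr(identifier,18,8),16) / 1000000000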
I've been writing new pipelines for my Edge Processors when I discovered that no destination values are showing up for me to select. We only have two: our default destination (our cloud instance) and an additional cloud instance. When I go to create a new pipeline, or modify an old one that is already configured to go to the default destination, the destination doesn't show up. The pipelines already created are still working and sending data in. Any idea what could be causing this to happen? This occurred very recently; prior to this I was able to create pipelines and add a destination. Even a work-around would be appreciated. Thanks!
Hi. To accurately convert TAI64N to a human-readable timestamp in Splunk, you need to:

1. Subtract the TAI64 epoch offset (0x4000000000000000) from the first 16 hex digits (seconds)
2. Add the nanoseconds (next 8 hex digits) as a fractional part
3. Format the result using strftime and printf

Here's the corrected SPL:

| makeresults
| eval identifier="@4000000068022d4b072a211c"
| eval tai64n_hex = substr(identifier, 2)
| eval tai64_seconds = tonumber(substr(tai64n_hex, 1, 16), 16) - tonumber("4000000000000000", 16)
| eval tai64_nanoseconds = tonumber(substr(tai64n_hex, 17, 8), 16)
| eval tai64_epoch = tai64_seconds + (tai64_nanoseconds / 1000000000)
| eval formatted_time = strftime(tai64_epoch, "%Y-%m-%d %H:%M:%S") . "." . printf("%03d", round((tai64_nanoseconds / 1000000), 0))
| table formatted_time

tai64_seconds extracts and normalises the seconds since the Unix epoch. tai64_nanoseconds extracts the nanoseconds. tai64_epoch combines seconds and fractional seconds. strftime formats the timestamp, and printf ensures milliseconds are zero-padded.

Note: TAI64N timestamps are based on TAI, not UTC. TAI is ahead of UTC by a number of leap seconds (currently 37). Splunk and most systems use UTC, so your converted time may be offset by this difference. If you need exact UTC, subtract the current TAI-UTC offset (e.g., 37 seconds) from tai64_epoch.
Okay, thank you for your reply. Is it possible to parse the TAI64N timestamp while indexing? If so, how can we do it?