Can't a hot bucket just roll directly to a cold bucket, or is that not possible? Does it have anything to do with the fact that the hot bucket is actively being written to? Can anyone please shed some light on this at a technical level, as I'm not getting the answer I'm looking for from the documentation. Thanks in advance.
How do I generate reports and run stats on key=value pairs from just the message field, ignoring the rest of the fields?

{"cluster_id":"cluster", "message":"Excel someType=MY_TYPE totalItems=1 errors=ABC, XYZ status=success","source":"some_data"}

I've gone through multiple examples but could not find something concrete that will help me group by the key someType, compute stats on totalItems, and list the top errors (ABC, XYZ). These don't have to be in the same query; I assume the top-errors grouping would be a separate query.
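A hedged sketch of one way to do this with rex and stats. The index and sourcetype below are placeholders, the pattern assumes the message field always follows the layout shown above, and the spath line is only needed if the JSON fields aren't auto-extracted:

index=your_index sourcetype=your_sourcetype
| spath path=message output=message
| rex field=message "someType=(?<someType>\S+)\s+totalItems=(?<totalItems>\d+)(\s+errors=(?<errors>.+?))?\s+status=(?<status>\S+)"
| stats count sum(totalItems) as totalItems avg(totalItems) as avgItems by someType

For the top errors, a separate search over the same extraction can split the comma-separated list and count each value:

index=your_index sourcetype=your_sourcetype
| spath path=message output=message
| rex field=message "errors=(?<errors>.+?)\s+status="
| makemv delim="," errors
| mvexpand errors
| eval errors=trim(errors)
| top limit=10 errors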
Our MySQL server was upgraded from 5.7 to 8.0.37, and the MariaDB plugin no longer supports exporting audit log files. Are there any methods to export audit logs in a Windows environment?
I want to receive Keycloak logs in the Splunk Cloud platform. I found Keycloak apps in Splunkbase, but they seem to be unavailable in Splunk Cloud. Are there any methods to receive Keycloak logs in Splunk Cloud?
Hello, is it possible to create an HEC token from the CLI of a Linux host? Any recommendations on how to create an HEC token from the CLI would be greatly appreciated. Thank you!
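One hedged option from a Linux shell is to call Splunk's REST management API with curl. The endpoint below is the standard HEC inputs endpoint, but the exact path, credentials, and whether this is permitted at all (for example on Splunk Cloud) depend on your deployment, so treat this as a sketch:

# Create a new HEC token named "my_hec_token" (management port 8089; -k skips cert validation)
curl -k -u admin:yourpassword \
  https://localhost:8089/servicesNS/admin/splunk_httpinput/data/inputs/http \
  -d name=my_hec_token -d index=main -d sourcetype=my_sourcetype

# Read the generated token value back
curl -k -u admin:yourpassword \
  "https://localhost:8089/servicesNS/admin/splunk_httpinput/data/inputs/http/my_hec_token?output_mode=json"

The second call's response should include the token string, which can then be used in the "Authorization: Splunk <token>" header when sending events.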
I've been asked to generate an uptime report for Splunk.  I don't see anything obvious in the monitoring console, so I thought I'd try to see if I could build a simple dashboard.  Does the monitoring console log things like 
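If the goal is a rough per-instance uptime percentage, one hedged approach (a proxy rather than a true availability metric) is to treat gaps in each instance's own _internal telemetry as downtime. This assumes a bounded time range and that _internal retention covers it:

| tstats count where index=_internal by host _time span=5m
| stats count as up_intervals by host
| addinfo
| eval total_intervals = ceiling((info_max_time - info_min_time) / 300)
| eval uptime_pct = round(100 * up_intervals / total_intervals, 2)
| table host up_intervals total_intervals uptime_pct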
Register here. Ask the experts at Community Office Hours! An ongoing series where technical Splunk experts answer questions and provide how-to guidance on various Splunk product and use case topics.

In this special session on "Splunk Search & New SPL Innovations", Splunk experts kick us off with a round-robin to showcase the latest innovations in search, such as the Splunk AI Assistant for SPL app, Federated Search for Amazon S3, and SPL2.

What can I ask in this AMA?
- How can I reduce my skipped searches?
- How do I translate my question into SPL?
- How can I optimize this search query so it runs faster?
- How do I set up federated search for Splunk?
- What are the advantages of using federated search for Amazon S3?
- How do I convert my SPL into SPL2?
- My search is not displaying properly, how do I fix it?
- How do I create an alert/visualization/dashboard from my search?

Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here). Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
We're excited to announce a powerful update to Splunk Data Management with added support for Amazon Data Firehose in Edge Processor! This enhancement enables you to use Amazon Data Firehose (formerly Amazon Kinesis Data Firehose) as a data source, offering greater flexibility and efficiency in managing data streams. With integration across over 20 AWS services, you can now easily stream data into Splunk from sources like Amazon CloudWatch, SNS, AWS WAF, Network Firewall, IoT, and more.

What's new?

Integration with Amazon Data Firehose
With this update, Edge Processor can now directly ingest logs from Amazon Data Firehose, enabling seamless streaming from various AWS services into Splunk for real-time analysis and visualization. Whether monitoring cloud infrastructure, applications, or security events, this addition broadens your data source options, enhances your ability to gain real-time insights, and simplifies data pipeline management while both reducing latency and ensuring faster access to critical data.

Acknowledgement for HEC data
This release also introduces another crucial feature in Edge Processor: receiver acknowledgement for upstream HTTP Event Collector (HEC) data. This preserves data integrity by ensuring HEC events sent to the processor are properly received and acknowledged, adding an additional layer of confidence that no information is lost during transmission between data inputs and Edge Processors.

Ingesting VPC flow logs into Edge Processor via Firehose streams
In the following sections, we'll guide you through how to integrate Amazon Data Firehose into your existing Splunk setup. Specifically, we'll focus on setting up a HEC token for your Edge Processor, configuring VPC flow log ingestion into Splunk via Amazon Data Firehose, and achieving network traffic CIM compliance using SPL2 pipelines. An architectural diagram illustrating the high-level components involved in this setup can be seen below. You can also view this step-by-step guide in Lantern.

Note: The following steps assume you already have access to the following:
- an Edge Processor tenant with a paired EC stack,
- an Edge Processor instance running on a machine with an accessible URL, and
- an AWS account.
Furthermore, to ensure proper data ingestion, your Edge Processors' HEC receivers should accept data over TLS, not mTLS. This can be configured in your tenant's web UI.

Applying a HEC token to your Edge Processor
HEC tokens are used by the HTTP Event Collector to authenticate and authorize data sent to Splunk. These tokens securely manage data intake from various sources over HTTP/HTTPS, ensuring that only authorized data is accepted and properly categorized for analysis. Fortunately, the process of generating and setting up a token for use within your Edge Processor is relatively straightforward:

1. Open a web browser and navigate to your Splunk Cloud Platform instance. Then, using the dropdown menus located at the top of the page, select "Settings" > "Data Inputs".
2. In the table titled "Local inputs", locate the "HTTP Event Collector" row and click the "+ Add New" button on the right-hand side.
3. You should be directed to a form requesting various HEC-related information. The only required field is the token name, though you can fill in additional details to better suit your use-case. Once finished, review and submit the form using the navigation buttons in the top-right corner.
4. Beneath the resulting "Token is being deployed" header should be an immutable text box labeled "Token Value".
Copy this value to your clipboard, as it'll be needed shortly. Now that a valid HEC token has been generated, it's time to apply it to your Edge Processor:

1. Navigate to your Edge Processor tenant in a web browser. You can do so by visiting console.scs.splunk.com/<tenant-id> and logging in via your user- or company-provided SSO.
2. On the left-hand side of the landing page, select "Edge Processors" > "Shared settings". This will open a page used to configure various receiver settings.
3. In the "Token authentication" section, click the "New token" button on the right-hand side, then paste the previously-copied HEC token value into the "HEC token" field.
4. (Optional) Configuring the "Source" and "Source type" fields is strongly recommended here, as doing so assigns default values to incoming data lacking them. This is especially important because source/sourcetype are typically used as partition values in the SPL2 pipelines transforming data within an Edge Processor instance. We'll be using default-source and default-sourcetype for demonstration purposes. However, more accurate values may consist of aws:kdf, aws:vpc-flow-log, etc.
5. Once everything has been properly configured, click the "Save" button in the bottom-right corner of the page.

Configuring VPC flow log ingestion into Splunk
VPC flow logs capture essential information about the IP traffic to and from network interfaces in your Virtual Private Cloud. By streaming these logs through Amazon Data Firehose, you can efficiently route the data to Edge Processor for real-time processing and analysis, enabling deeper insights within your Splunk environment. To set this up, you'll first need to create a Firehose stream:

1. Navigate to your AWS Management Console. Use the search bar at the top of the page to locate the "Amazon Data Firehose" service's homepage, then click the "Create Firehose stream" button in the top-right corner.
2. For "Source" and "Destination", select "Direct PUT" and "Splunk" from the input fields' dropdown menus, respectively. This will populate the form with additional configuration settings.
3. Within the "Destination Settings" panel, enter the URL of the machine hosting your Edge Processor instance in the "Splunk cluster endpoint" field. This URL should always follow the format https://<host_machine_url>:8088 and should point to your Edge Processor instance, not the tenant. Note: In order for this to work properly, your instance's URL must use HTTPS, and the host machine should be configured to allow incoming HTTP/TCP traffic on the specified HEC receiver port (e.g., 8088).
4. In the "Authentication token" field of this same panel, copy and paste the value of the HEC token generated previously.
5. Finally, in the "Backup settings" panel, you must specify an S3 bucket to ensure data recovery in the event of transmission failures or other issues during the streaming process. If you do not already have an S3 bucket set up, follow the instructions provided here.
6. Once finished, click the "Create Firehose stream" button in the bottom-right corner of the form.

To test whether you've configured everything correctly before moving on, navigate to your newly-created Firehose stream and expand the panel titled "Test with demo data". Upon clicking the "Start sending demo data" button, dummy data should be routed from your Firehose stream through your Edge Processor instance. To verify this is working as expected, select the "Edge Processors" tab on the left-hand side of your tenant's UI and double-click the row containing your Edge Processor.
Within a minute or two, the "Data flowing through in the last 30 minutes" metrics in the bottom-right corner of the page should reflect some small amount of inbound data, likely categorized by the default source and sourcetype values specified previously. If this isn't the case, be sure to check your Firehose stream's destination error logs in Amazon CloudWatch.

With the Firehose stream now configured to send data to your Edge Processor instance, the final step is to create a VPC flow log and direct it to the Firehose stream:

1. In the same AWS Management Console as before, navigate to the "VPC" service's homepage using the search bar provided at the top of the page.
2. Depending on your use-case, you may want to create a new VPC or use an existing one. Instructions for creating a new one can be found in the official AWS documentation. For the purposes of this demonstration, we'll be using the default VPC provided by AWS.
3. Click the "VPCs" hyperlink in the "Resources by region" section of the page and select its associated "VPC ID". This value will be of the format "vpc-<hexstring>". The resulting page will display information related to the selected VPC's configuration.
4. Beneath the section titled "Details", open the "Flow logs" tab and click the "Create flow log" button on the right-hand side of the panel.
5. For the "Destination" field, choose the "Send to Amazon Data Firehose in the same account" option. Then, select your previously-created Firehose stream from the dropdown menu of the resulting "Amazon Firehose stream name" field.
6. For the "Log record format" field, you can choose to use AWS's default format or customize your own. Which fields are included is ultimately decided by your use-case; however, it's important to note the format preview displayed below, as it will come in handy when creating the SPL2 pipeline used to transform these logs in the following step.
7. Once finished, click "Create flow log" in the bottom-right corner of the form.

At this point, you should begin to see VPC flow logs populating the destination specified by your Edge Processor. If routing to Splunk Cloud Platform, you can identify these logs by searching for the default source and sourcetype values defined previously, as shown in the sketch below. Again, in the event something has gone wrong, checking the Firehose stream's destination error logs is a great starting point for debugging.
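For a quick spot-check that raw flow logs are arriving before you build any pipeline, a simple search against the destination can help. This is a hedged sketch: it assumes the destination is a Splunk index you can search and that the default source/sourcetype values configured earlier were applied, so adjust the index and field values to match your environment:

index=* source=default-source sourcetype=default-sourcetype earliest=-30m
| head 10

Swap the head for a "| stats count by source sourcetype" if you just want volumes rather than raw events.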
Achieving CIM compliance using SPL2 pipelines
With VPC flow logs now successfully ingested into Edge Processor, the next step is to transform these logs to align with the CIM Network Traffic data model. By leveraging specific SPL2 commands, we can build and apply a pipeline that maps the flow log fields to their CIM equivalents. This will ensure the data is normalized, enabling consistent and effective analysis across Splunk's search and reporting capabilities. To accomplish this, we must first create a SPL2 pipeline:

1. Navigate to your Edge Processor tenant in a web browser.
2. On the left-hand side of the page, select the "Pipelines" tab and click the "+ New pipeline" button in the top-right corner.
3. You will be prompted to select a template from which your pipeline will be created. Despite the abundance of options to choose from, select "Blank pipeline" and click "Next".
4. Next, define the partition(s) for your pipeline. Because we configured our receiver to append default source and sourcetype values to logs without them, it's best to set the partition to match one or both of these values, since VPC flow logs don't include them by default.
5. (Optional) In the "Enter or upload sample data" input box, it may be useful for testing purposes to paste one of the VPC flow logs ingested earlier. For example, the AWS default format will produce a log similar to the following, which can be seen in the Splunk Cloud Platform screenshot above:
{"message":"2 215263928837 eni-05a082dab7784e51f 35.203.211.189 172.31.61.177 54623 5800 6 1 44 1723573216 1723573232 REJECT OK"}
Depending on whether sample data is provided, click either the "Next" or "Skip" button in the bottom-right corner of the page to continue.
6. Finally, select the desired data destination from the list and click "Done" to create your pipeline.

Now that the pipeline has been created, we can use various SPL2 commands to extract information from the flow log and map it to CIM-compliant field names. For AWS flow logs specifically, the default record format (referenced in step 6 of the previous section) is:

${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status}

According to Splunk's field mapping documentation, the following changes will need to be made in order to achieve CIM compliance:

(not required) version
account-id → vendor_account
interface-id → dvc
srcaddr → src_ip
dstaddr → dest_ip
srcport → src_port
dstport → dest_port
protocol → transport
(unchanged) packets
(unchanged) bytes
(calculated) start, end → duration
(not required) action
(not required) log-status

The next step involves implementing these changes in code. Notably, the rex command can be used to parse the raw flow log, extracting only fields that are essential for compliance. Fields like version, action, and log-status, which are not required, should be intentionally excluded from this extraction process, ensuring that only necessary information is retained. Additionally, the pipeline should calculate the duration of the network session using the provided start and end timestamps in order to align with the data model specified by the CIM. Finally, the fields command can remove the start and end fields from the log, as they are no longer needed after calculating duration. Here's an example of what the resulting SPL2 may look like:

$pipeline = | from $source
    | rex field=_raw /{"message":"\S+ (?P<vendor_account>\S+) (?P<dvc>\S+) (?P<src_ip>\S+) (?P<dest_ip>\S+) (?P<src_port>\S+) (?P<dest_port>\S+) (?P<transport>\S+) (?P<packets>\S+) (?P<bytes>\S+) (?P<start>\S+) (?P<end>\S+) \S+ \S+"}/
    | eval duration = end - start
    | fields - start, end
    | into $destination;

Now that all the data transformation logic is in place, the only remaining step is to save the pipeline and apply it to your running Edge Processor:

1. In the top-right corner of the pipeline editor, click the "Save pipeline" button, provide a required name and an optional description, and click "Save".
2. After a few seconds, you'll be met with a popup titled "Apply pipeline". Click "Yes, apply", select the targeted Edge Processor(s), and click "Save" in the bottom-right corner.
3. A notification should appear indicating that the pipeline update may take a few minutes to propagate. To check on the status of your processor, click the Splunk icon in the top-left corner to navigate back to the landing page, select the "Edge Processors" tab on the left-hand side, and monitor its "Instance Health". It should eventually reach a healthy (i.e. green) status.
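Once the pipeline reports healthy, a quick way to spot-check the transformed events at the destination is to table the new fields. Again a hedged sketch, assuming the destination is a searchable Splunk index and the default sourcetype from earlier is still attached to the events:

index=* sourcetype=default-sourcetype earliest=-15m
| table _time vendor_account dvc src_ip dest_ip src_port dest_port transport packets bytes duration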
Logs routed to your specified destination should now contain the CIM-compliant fields appended above.

Conclusion
With the introduction of Amazon Data Firehose support in Edge Processor, managing and analyzing your AWS data streams has never been easier. This update not only expands your data source options but also enhances the reliability of data transmission with receiver acknowledgement for upstream HEC data. Whether you're monitoring cloud infrastructure, analyzing security events, or ensuring CIM compliance, these new capabilities provide you with the tools needed to optimize your Splunk environment. We encourage you to explore these features and see how they can enhance your data processing workflows.

To get started with one (or both!) of our Data Management pipeline builders, fill out the following form. For more Edge Processor resources, check out the Data Management Resource Hub. If you'd like to request a feature or provide any other feedback, we strongly encourage you to create a Splunk Idea and/or send an email to edgeprocessor@splunk.com. You can also join the lively discussion in the #edge-processor channel of the splunk-usergroups workspace in Slack. It's an excellent forum to learn from the community on the latest Edge Processor use-cases.

Happy Splunking!
I am trying to ingest Proofpoint TAP logs into our Splunk environment and noticed that our Proofpoint TAP app is showing the dashboards for the Cisco FMC app for some reason. I thought I could resolve it by deleting the app and reinstalling it, but even after doing that it is still showing the FMC app. Has anyone seen this before? I tried looking for other posts with this issue, but my search is coming up short.
Hello,

I'm attempting to display a group of logs by tranId. We log multiple user actions under a single tranId, and I'm trying to group all of the logs for a single tranId in my dashboard. I think I figured out how I want to display the logs, but I can't get the datetime format to display correctly.

index blah blah
| eval msgTxt=substr(msgTxt, 1, 141)
| stats list(_time) as DateTime list(msgTxt) as Message list(polNbr) as QuoteId by tranId
| eval time=strftime(_time," %m-%d-%Y %I:%M:%S %p")
| streamstats count as log by tranId
| eval tranId=if(log=1,tranId,"")
| fields - log

Please help with displaying the date and time format. Thanks
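A possible fix (a sketch, keeping the rest of the search unchanged): the strftime step can't work where it is, because after the stats command there is no _time field left to format; the values were already rolled up into the DateTime list. Formatting the time into a string before the stats, and listing that string instead, should give the display you want:

index blah blah
| eval msgTxt=substr(msgTxt, 1, 141)
| eval DateTime=strftime(_time, "%m-%d-%Y %I:%M:%S %p")
| stats list(DateTime) as DateTime list(msgTxt) as Message list(polNbr) as QuoteId by tranId
| streamstats count as log by tranId
| eval tranId=if(log=1, tranId, "")
| fields - log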
My Splunk search is as follows:

index="someindex" cf_space_name="somespace" msg.severity="*"
| rex field=msg.message ".*METHOD:(?<method>.*),\sREQUEST_URI:(?<requestURI>.*),\sRESPONSE_CODE:(?<responseCode>.*),\sRESPONSE_TIME:(?<responseTime>.*)\sms"
| stats count by msg.service, method, requestURI, responseCode
| sort -count

Result table:

msg.service   method   requestURI      responseCode   count
serviceA      GET      /v1/service/a   200            327
serviceB      POST     /v1/service/b   200            164
serviceA      POST     /v1/service/a   200            91

Under Visualization, I am trying to turn this into a bar chart. I am getting all four fields on the x-axis: msg.service is plotted against count, and responseCode is plotted against responseCode. The other two fields are not visible since they are non-numeric. If I remove fields using the following, I get the proper chart (just msg.service against count):

my query | fields - responseCode, method, requestURI

But I need something like this on the x and y axes:

x-axis                           y-axis
serviceA GET /v1/service/a 200   327
serviceB POST /v1/service/b 200  164
serviceA POST /v1/service/a 200  91

How can I achieve this?
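One hedged way to get that grouping is to concatenate the group-by fields into a single label after the stats, then chart count against that label. Note the single quotes around msg.service in the eval, which are needed because of the dot in the field name:

index="someindex" cf_space_name="somespace" msg.severity="*"
| rex field=msg.message ".*METHOD:(?<method>.*),\sREQUEST_URI:(?<requestURI>.*),\sRESPONSE_CODE:(?<responseCode>.*),\sRESPONSE_TIME:(?<responseTime>.*)\sms"
| stats count by msg.service, method, requestURI, responseCode
| eval label = 'msg.service'." ".method." ".requestURI." ".responseCode
| table label count
| sort - count

With only label and count left in the results, the bar chart should put the combined label on the x-axis and count on the y-axis.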
What's up Splunk Community! Welcome to the latest edition of the Observability Round-Up, a monthly series in which we spotlight the latest and greatest content that our crack team of experts has thoughtfully crafted for you. Let's dive into our top 3 topics this month!

Tech Talk Recap: Out-of-the-Box to Up and Running: Streamlined Observability for your Cloud Environment
Earlier this month, we held a terrific Tech Talk with presenters Joe DeBlaquiere, Moss Normand, and @Teneil_Lawrence going in-depth on the most recent advancements in out-of-the-box capabilities within Observability Cloud. The key topics covered were:
- The ease with which you can set up the Splunk Distribution of the OpenTelemetry Collector.
- The vast coverage and deep insights delivered by Splunk navigators, including the Kubernetes navigator.
- The most useful built-in detectors and alerts to streamline your troubleshooting experience.
Watch the Session On-Demand

Community Office Hours: Getting Data into Observability Cloud
We kept things rolling with an "Ask the Experts" session on getting data into an Observability Cloud environment. Our panelists covered a number of customer requests, including:
- An overview of Ingest Processor, a new pipeline builder with a "logs to metrics" capability that allows Splunk Cloud users to convert logs to metrics and route them to Observability Cloud for additional insights and cost optimization. (View the live solution from 14:59-38:46 of the session recording and see slides 8-18 of the deck.)
- Demo request: How do I set up the Splunk OpenTelemetry Collector for Kubernetes with Splunk Enterprise via the HEC exporter? (View the live solution from 39:14-46:36 of the recording and see slide 19 of the deck.)
- Demo request: Can you show us how to do process monitoring via the OTel Collector? (See slides 20-22 of the deck and view the live solution from 51:30-57:40 of the recording.)
Watch the Session Recording On-Demand (starts at the 4:00 mark)
View the Q&A Slide Deck

Learn to enable Log Observer Connect for AppDynamics
Finally, you might remember a small development earlier this year in which Splunk became a part of Cisco. One exciting aspect of the acquisition was that AppDynamics was brought into the Splunk Observability fold, and our teams have been hard at work on the integration roadmap. We're pleased to share that one of the first pillars of the integration plan is complete and that Log Observer Connect for AppDynamics is now available. If you'd like to learn how to set this up, we have a brand spanking new Lantern article with step-by-step instructions and an accompanying video demo.

__________________________________________________________________________

If you have any content requests, we are always happy to hear them! Drop me a line at avirani@splunk.com.

Until next time,
Arif
Hello,

I want to initialize a token with the week number of today's date. According to the documentation (https://docs.splunk.com/Documentation/SCS/current/Search/Timevariables), the variable to use to get the week of the year (1 to 52) is %V. This works in any search query, but it does not work when used in the <init> tag of a dashboard. This is my <init>:

<form version="1.1" theme="dark">
  <init>
    <eval token="todayYear">strftime(now(), "%Y")</eval>
    <eval token="todayMonth">strftime(now(), "%m")</eval>
    <eval token="todayWeek">strftime(now(), "%V")</eval>
    <eval token="yearToken">strftime(now(), "%Y")</eval>
    <eval token="monthToken">strftime(now(), "%m")</eval>
  </init>
...

All of these tokens are initialized correctly except todayWeek, which uses the %V variable and ends up with no value. What am I doing wrong?
Hi, I'm trying to learn how appendpipe works. To do that, I've tried this dummy search, and I don't understand why appendpipe returns the highlighted row.
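Since the dummy search isn't shown, here is a hedged, self-contained illustration of the behaviour: appendpipe runs its subpipeline against the results that exist at that point in the search and appends the subpipeline's output as extra rows, which is typically where the unexpected extra row comes from:

| makeresults count=3
| streamstats count as n
| eval host="host".n, bytes=n*100
| stats sum(bytes) as bytes by host
| appendpipe
    [ stats sum(bytes) as bytes
    | eval host="TOTAL" ]

The first four lines fabricate three rows; appendpipe then adds a fourth row (host=TOTAL) holding the grand total, while the original three rows pass through untouched.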
Hi Splunk,

I can find this course in the course catalog page (https://www.splunk.com/en_us/training/course-catalog.html):
https://www.splunk.com/en_us/pdfs/training/soc-essentials-investigating-and-threat-hunting-course-description.pdf
but I cannot find it on the enrollment page:
https://education.splunk.com/Saba/Web_spf/NA10P2PRD105/app/shared;spf-url=common%2Flearningcatalog%2F#/guest/trqledetail/cours000000000003416

Asking for a friend.
Best,
volston
Greetings, I found some useful savedsearches under SA-AccessProtection / DA-ESS-AccessProtection, which I am interested in using. However, I'd like to understand these use-cases before making them live.   Are these apps and their content documented somewhere? So far, I have not had any luck.   Thanks!
I have a dashboard that a specific team uses. Today, they asked why one of the panels was broken. Looking into it, we were receiving this error from the search:

Error in 'fit' command: Error while fitting "StateSpaceForecast" model: timestamps not continuous: at least 33 missing rows, the earliest between "2024-01-20 07:00:00" and "2024-01-20 09:00:00", the latest between "2024-10-02 06:00:00" and "2024-10-02 06:00:01"

That seemed pretty straightforward: I thought we might be missing some timestamp values. This is the query we are running:

| inputlookup gslb_query_last505h.csv
| fit StateSpaceForecast "numRequests" holdback=24 forecast_k=48 conf_interval=90 output_metadata=true period=120

Looking into the CSV file itself, I went to look for missing values under the numRequests column. We have values for each hour going back almost a year. Looking at the timestamps mentioned in the error (screenshot not shown), there is an hour missing there: the 08:00 timestamp. That may be the cause. How would I go about efficiently finding the 33 missing values? Each missing value would be in between two hours. Will I have to go through and find skipped hours among 8k results in the CSV file?

Thanks for any help.
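Rather than eyeballing ~8k rows, the search itself can compute the gaps. A hedged sketch, assuming the lookup's time column is _time in epoch form (if it's a string timestamp, convert it first with strptime):

| inputlookup gslb_query_last505h.csv
| sort 0 _time
| streamstats current=f last(_time) as prev_time
| eval gap_hours = (_time - prev_time) / 3600
| where gap_hours > 1
| eval missing_rows = gap_hours - 1
| eval from_time = strftime(prev_time, "%F %T"), to_time = strftime(_time, "%F %T")
| table from_time to_time missing_rows

Summing missing_rows should account for the 33 rows the fit command is complaining about. Alternatively, makecontinuous _time span=1h (optionally followed by fillnull or filldown on numRequests) can insert the missing hourly rows before running fit.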
Is there any guide on how to configure security products to send their logs to Splunk or what are the recommended logs that should be sent, like the DSM guide in QRadar?
I have confirmed that SmartStore is working. I have a question regarding the 100GB EBS volume attached to the EC2 instance. If I do not set max_cache_size in indexes.conf, will Splunk freeze when the cache fills the 100GB volume?

In another test, with a 10GB EBS volume, Splunk would freeze with a full-capacity error if max_cache_size was not set.

What I would like to ask is: if I don't set max_cache_size, will Splunk stop when the volume becomes full?
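For reference, here is a hedged sketch of the cache-size settings involved. Note that in current versions these cache manager settings normally live in server.conf under a [cachemanager] stanza on the indexer (rather than indexes.conf), and names and defaults can differ between releases, so verify against the docs for your version:

# server.conf (indexer) -- sketch only, values in MB
[cachemanager]
max_cache_size = 80000       # cap the cache below the 100GB volume
eviction_padding = 5120      # free space the cache manager tries to keep on the volume
eviction_policy = lru

The general behaviour to confirm in your own testing is that eviction is driven by these limits plus the volume's free space; if the cache manager can't keep enough headroom on a small volume, indexing can pause once the disk's free-space safeguard kicks in, which would be consistent with what you saw on the 10GB test.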
I have a Splunk query which generates output in csv/table format. I wanted to convert this to JSON before writing it to a file. tojson does the job of converting; however, the fields are not in the order I expect. The table output order is timestamp, Subject, emailBody, operation, but the resulting JSON output is in the order subject, emailbody, operation, timestamp. How do I make tojson write fields in this order, or is there an alternate way of getting the JSON output as expected?
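If the goal is just one JSON string per row with a fixed key order, a hedged alternative to tojson is to build the object explicitly with the json_object() eval function (available in recent Splunk versions); in practice it emits the keys in the order you pass them. The first line below is a placeholder for your existing search:

<your existing search producing timestamp, Subject, emailBody, operation>
| eval json = json_object("timestamp", timestamp, "Subject", Subject, "emailBody", emailBody, "operation", operation)
| fields json

Keep in mind that JSON object keys are formally unordered, so this helps when the file's consumer insists on a specific order, but a consumer that parses the JSON properly shouldn't need it.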