All Posts

Yes, you can achieve this by using a Python script as a scripted input in Splunk. You can read the data using Python, perform the modifications as you described (decoding the JSON, updating the dictionary, and re-encoding it), and output the modified data. Here's how it works:

Create a Python script:
- Read the incoming data.
- Apply the necessary transformations.
- Print the modified JSON to standard output (stdout).

Configure the scripted input in Splunk:
- Go to Settings > Data Inputs > Scripts.
- Add a new scripted input and select your Python script.
- Set a cron schedule for when the script should run.

The script will run at the configured intervals, fetch the data, apply your changes, and send the transformed data to Splunk for indexing.

Important consideration: data ingestion will depend on the cron schedule of the scripted input, so real-time or very frequent data processing might not be achievable. Adjust the schedule as needed based on your data update frequency.
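Not from the original thread, but a minimal sketch of what such a scripted input could look like, assuming the appliance events land in a file with one JSON document per line; the file path, the key names and the read_raw_events() helper are illustrative assumptions, not part of the original answer.

#!/usr/bin/env python3
# Minimal sketch of a scripted input that strips unwanted keys from JSON
# events before Splunk indexes them.
import json
import sys

KEYS_TO_DROP = {"debug_info", "raw_payload"}  # hypothetical keys to remove

def read_raw_events():
    # Placeholder: fetch events from wherever they are written,
    # e.g. a spool file or an API. One JSON document per line is assumed.
    with open("/var/spool/appliance/events.json", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield line

def main():
    for raw in read_raw_events():
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than aborting the run
        for key in KEYS_TO_DROP:
            event.pop(key, None)
        # Whatever is printed to stdout is what Splunk indexes.
        sys.stdout.write(json.dumps(event) + "\n")

if __name__ == "__main__":
    main()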
Thank you for your reply. I can't pre-process the events before ingestion in Splunk because they are sent by an appliance directly to an HEC input. Christian
Hi @ktn01, yes, it's possible, but it isn't done inside Splunk, because the data is pre-processed before ingestion: I did it for a customer. Pay attention to one issue: if you change the format of your logs, you have to completely rebuild the parsing rules for your data, because the standard parsing rules will no longer apply to the new data format. Ciao. Giuseppe
Is it possible to use a Python script to perform transforms during event indexing? My aim is to remove keys from JSON events to reduce volume. I'm thinking of using a Python script that decodes the JSON, modifies the resulting dict, and then encodes the result as a new JSON document that will be indexed.
| eval Device=Device_name.":".src_ip | table Device state_to count primarycolor secondarycolor info_min_time info_max_time
@luizlimapg, thanks for your reply. Is there any confirmation after the curl call, or a way to check whether the password was added successfully? Is there any other way to add a password?

Enter host password for user 'user':
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number

I found that this is possible using the same curl, but I got an error:

curl -k -u user https://localhost:8089/servicesNS/nobody/app/storage/passwords/
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
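Not part of the original post, but a minimal sketch of how the storage/passwords REST endpoint is typically exercised; the user, app context ("search"), realm and credential names below are placeholder assumptions. A successful POST normally returns an HTTP 201 with an XML entry describing the new credential, which serves as the confirmation. The "wrong version number" SSL error usually suggests the client and the management port disagree on TLS (for example, splunkd serving plain HTTP on 8089); in that case trying http:// instead of https:// is worth checking.

# List existing stored credentials (confirms what is currently saved):
curl -k -u admin https://localhost:8089/servicesNS/nobody/search/storage/passwords

# Add a credential; the XML entry returned on success is the confirmation:
curl -k -u admin https://localhost:8089/servicesNS/nobody/search/storage/passwords \
  -d name=myuser -d password=mypassword -d realm=myrealm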
Hi, I'm trying to build a dashboard for user gate access. How can I visualise this with live data? I'm looking for a built-in visualization that would help with this, something like Missile Map but for a user moving from one gate to another.
Sorry for having been annoying; I'm stopping this behavior.
"Best practice" depends heavily on use case. There are some general best practices but they might not be suited well to a particular situation at hand. That's why I suggest involving a skilled profes... See more...
"Best practice" depends heavily on use case. There are some general best practices but they might not be suited well to a particular situation at hand. That's why I suggest involving a skilled professional who will review your detailed requirements and suggest appropriate solution.
Please stop UP-ing the thread. You haven't found a similar issue in old threads, and no one here seems to be able to help you right now. It's time to engage support. Posting "UP" once a week only clutters the forum. Thanks for understanding.
Thank you for your detailed response! If I were to implement a Heavy Forwarder (HF) in my architecture, would this be the correct approach? Additionally, would it be considered a best practice for forwarding data to an external system?
Hi everyone, I've recently integrated Lansweeper (cloud) data into my Splunk Cloud instance, but over the past few days I've been encountering some ingestion issues. I used the add-on: https://splunkbase.splunk.com/app/5418

Specifically, the data source intermittently stops sending data to Splunk without any clear pattern. Here's what I've checked so far:
- My configuration seems fine, and the polling interval is set to 300 seconds.
- The ingestion behavior appears inconsistent, as seen in the attached image. Based on the type of data Lansweeper generates, I wouldn't expect this inconsistency.
- While double-checking my configuration, I noticed an error, yet the source still manages to ingest data sporadically at certain hours.

Has anyone experienced similar issues, or could you provide guidance on how to debug this further? Thanks in advance for your help!
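Not from the original post, but two hedged searches that often help narrow down gaps like this; the index name, time range and search terms below are placeholder assumptions and should be adapted to how the add-on is actually configured.

To see exactly when the gaps occur (placeholder index name):
  index=lansweeper earliest=-7d
  | timechart span=1h count by sourcetype

To check whether the add-on logs errors around the same times:
  index=_internal earliest=-7d log_level=ERROR lansweeper
  | stats count by component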
I apologize for the mistake in my previous reply about forwarding data. To clarify, the data to be forwarded will be new data only. Regarding your question, could you clarify what you mean by "how much data"? Are you asking about the data volume per day or the total size of all data? The data to be forwarded comes from two indexers, and it includes all indexes.
UP
OK. You can't "forward" already existing data. You need to search the indexes, create a "dump" of the results and push them to the other solution.

For the continuously incoming data you can either use syslog output on your indexers or an S2S output to an external component (either a HF or a third-party solution which can talk S2S). The caveat here is that this introduces additional complexity and possible points of failure on your indexers. If your architecture had a separate HF layer in front of the indexers, through which all input streams went, you could do that on the HF layer instead of on the indexers.

A general "forward to an external system" solution is tricky to do well. You might want to engage PS or your local Splunk Partner.
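Not part of the original reply, but a hedged outputs.conf sketch of the two options mentioned above, assuming it lives on the indexers (or on a HF layer if one exists); the group names, hostnames and ports are placeholders.

# Option 1: cooked S2S output to an external HF / S2S-capable receiver,
# while still indexing locally (indexAndForward keeps the local copy)
[tcpout]
defaultGroup = external_hf
indexAndForward = true

[tcpout:external_hf]
server = external-hf.example.com:9997

# Option 2: syslog output to a third-party destination
[syslog]
defaultGroup = external_syslog

[syslog:external_syslog]
server = syslog.example.com:514
type = udp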
Nope. That was the issue that I had. If yours is caused by something else, you might need to raise a support ticket.
How much data do you have? Only one or some indexes, or all of them?
If I recall correctly, an SHC shouldn't replicate those files in etc/system/local. Those are host-specific local files by default. Are you absolutely sure that your host is defined in the inputs.conf file under system/local rather than inside some app? Can you check it from the CLI with the command "splunk btool inputs list --debug | egrep host"? Unfortunately this gives a lot of entries, but you can see whether 'etc/system/local' also appears in the list.
Hi @PickleRick, do you have any clue for a fix, or any other possible workaround?
Thank you for your response. We will need to forward both old and new data, and the process of forwarding will be continuous.