All Posts


https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTinput#services.2Fcollector.2Fevent

services/collector/event
Sends timestamped events to HTTP Event Collector using the Splunk platform JSON event protocol when auto_extract_timestamp is set to "true" in the /event URL. An example of a timestamp is: 2017-01-02 00:00:00. If there is a timestamp in the event's JSON envelope, Splunk honors that timestamp first. If there is no timestamp in the event's JSON envelope, the merging pipeline extracts the timestamp from the event. If "time=xxx" is used in the /event URL then auto_extract_timestamp is disabled. Splunk supports timestamps using the Epoch format.

In other words, unless you specify your URI as /services/collector/event?auto_extract_timestamp=true, your timestamp will _not_ be extracted from the event itself. Splunk will not even look for it; it will either take the timestamp from the JSON envelope or assume the current time if the envelope has none. And even when auto_extract_timestamp is set to true, extraction is still skipped in the cases listed above. See also https://www.aplura.com/assets/pdf/hec_pipelines.pdf
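To make the distinction concrete, here is a sketch of the two request shapes (host, payloads and message text are hypothetical):

# Timestamp taken from the JSON envelope ("time" is epoch seconds):
POST https://splunk.example.com:8088/services/collector/event
{"time": 1483315200, "event": {"message": "hello"}}

# Timestamp extracted from the event body itself, only because of the query parameter:
POST https://splunk.example.com:8088/services/collector/event?auto_extract_timestamp=true
{"event": "2017-01-02 00:00:00 something happened"}

In the first case the merging pipeline never looks inside the event; in the second it does, unless one of the exceptions above applies.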
You _probably_ could do it using INGEST_EVAL (I haven't tested it myself, but I don't see why it shouldn't work). Something like this in props.conf:

[host::hostA]
TRANSFORMS-hostA = send_to_syslog

and in transforms.conf:

[send_to_syslog]
REGEX = .
INGEST_EVAL = _SYSLOG_ROUTING=if(source="whatever","my_syslog_group",null())

EDIT: OK, there is obviously a much easier way I forgot about.

[send_to_syslog]
REGEX = /somewhere/my/source/file.txt
SOURCE_KEY = MetaData:Source
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group
Hi! Is there any way to make the data retrieval rate slower? Something like 1 hour's worth of data every 1 minute. When we try to pull 30 days of data from our Elastic server (about 4.4M events), it causes a huge network load spike and then the server stops responding.
If your LB is indeed an HTTP proxy, there is a good chance that you're already getting the X-Forwarded-For header. So it might be enough to enable the option I mentioned earlier.
I found a related case in Splunk Ideas: https://ideas.splunk.com/ideas/EID-I-1168. It's kind of complicated, since I'm new to Splunk and the SHC architecture is not simple.
Okay, so I need to talk with the LB/network team who can fix this, then.
Thank you Giuseppe. But the problem is related to filtering just one of the sources on that host. If you have a REGEX that is able to catch only the events of that specific source, you're good to go. But imagine you don't have a REGEX that can catch all the events of that source. How can you filter?
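If the source name itself is distinctive, one way is to match on the source instead of the event text, using the same SOURCE_KEY trick shown above and routing matches to the nullQueue. A sketch with hypothetical stanza and path names, in transforms.conf:

[drop_noisy_source]
REGEX = /var/log/myapp/noisy\.log
SOURCE_KEY = MetaData:Source
DEST_KEY = queue
FORMAT = nullQueue

and in props.conf:

[host::hostA]
TRANSFORMS-drop = drop_noisy_source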
I think you're looking for fillnull_value: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/tstats#Optional_arguments
Can you explain where I should put the perimeter.csv, and can you show me a little example of what this file looks like? Thanks.
Hi fellows,

It's time to migrate our good old standalone Splunk Enterprise to a 'Small Enterprise deployment'. I read through tons of docs; unfortunately, I didn't find any step-by-step guide, so I have many questions. Maybe some of you can help.

- The existing server is CentOS 7; the new servers will be Ubuntu 22.04. Just before the migration, I plan to upgrade Splunk on it from 9.1.5 to the latest 9.3.1 (it wasn't updated earlier because CentOS 7 is not supported by 9.3.1). OR should I set up the new servers with 9.1.5 and upgrade them after the migration?
- Our daily volume is 300-400 GB/day, and it will not grow drastically in the medium term. What are your recommended hardware specs for the indexers? Can we use the "Mid-range indexer specification", or should we go for the "High-performance indexer specification" (as described at https://docs.splunk.com/Documentation/Splunk/9.3.1/Capacity/Referencehardware)?
- If I understand correctly, I can copy /etc/apps/ from the old server to the new Search Head, so I will have all the apps, saved searches, etc. But which config files must be modified to send the data to the new indexers? (For forwarders this is clear, but we are using a lot of other inputs: REST API, HEC, scripted.)
- Do I configure our existing server as part of the indexer cluster (3rd member), and then remove it from the cluster once all the replications to the new servers are done? Or do I copy the index data to one of the new indexers, rename the buckets (adding the indexer's unique ID) and let the cluster manager do the job? (Do I need a separate cluster manager, or could the SH do the job?)

And here comes the big twist...

- Currently, we are using S3 storage via NAS Bridge for the cold buckets. This solution is not recommended and we are already experiencing fallbacks, so we planned to change the configuration to use SmartStore. How can I move the current cold buckets there? (It's a lot of data, and because of the NAS Bridge it is very, very slow to copy...)

Thanks in advance
Norbert
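On the "which config files" question: the piece that normally changes is outputs.conf on every instance that ingests data, including the search head if it is the one running the REST API, HEC and scripted inputs, so that its data is forwarded to the new indexers. A minimal sketch, with hypothetical hostnames:

[tcpout]
defaultGroup = new_indexers

[tcpout:new_indexers]
server = idx1.example.com:9997, idx2.example.com:9997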
Thanks @gcusello. I will try this and let you know. You are right about the lookup: whenever the IP values change, the lookup file also needs to be updated. I managed to get the output using the lookup earlier, but then I tried to update the lookup file in the same query, which messed up the column names, and the query wouldn't work anymore since the values had changed. I will try the summary method and let you know. Thanks again.
Hello, I want to send syslog entries to Splunk directly. The config is done with the command "syslogadmin --set -ip xxx.xxx.xxxx.xxx -port 65456". Here is the configuration in place, but it does not work. Are there other ports or other configuration to set up? Any ideas?

syslogadmin --show -ip
syslog.1 xxx.xxx.xxx.29 port 65456
syslog.2 xxx.xxx.xxx.30 port 65456
syslog.3 xxx.xxx.xxx.31 port 65456
syslog.4 xxx.xxx.xxx.80 port 65456

syslogadmin --show -facility
Syslog facility: LOG_LOCAL7

auditcfg --show -filter
Audit filter is enabled.
1-ZONE 2-SECURITY 3-CONFIGURATION 4-FIRMWARE 5-FABRIC 7-LS 8-CLI 9-MAPS
Severity level: INFO
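Note that something on the Splunk side also has to listen on that port. If those addresses point directly at Splunk instances rather than at a syslog server, a minimal inputs.conf sketch for a matching listener would look like this (UDP assumed, which is the usual transport for switch syslog):

[udp://65456]
sourcetype = syslog
connection_host = ip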
| metadata type=hosts index=*
By default HEC runs on HTTPS. If you really want to disable SSL, you can change it as below:

- In the Splunk UI, go to Settings -> Data Inputs -> HTTP Event Collector
- Click the "Global Settings" button and uncheck the "Enable SSL" option

If you find this solution helpful, please consider accepting it and awarding karma points!
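The same change can be made in configuration instead of the UI; a sketch of the equivalent inputs.conf stanza on the instance hosting HEC:

[http]
disabled = 0
enableSSL = 0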
You can do this with CSS. Essentially, you need a multivalue field with the colour you want as the second value; this can be the result of some calculation, e.g. based on the first three characters of the field. Then you can follow this link to see how it is done in similar circumstances: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-update-table-cell-color-as-per-the-another-field/m-p/599965#M49240
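A sketch of the multivalue part in SPL (field names and colours are hypothetical):

| eval colour=case(substr(status,1,3)=="ERR","red", substr(status,1,3)=="WAR","orange", true(),"green")
| eval status=mvappend(status, colour)
| fields - colour

The CSS from the linked post then reads the second value of the field to pick the cell colour.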
How can I get an output containing all host details of all time along with their last update times? The search below is taking a huge amount of time; how can it be optimized to run faster?

index=*
| fields host, _time
| stats max(_time) as last_update_time by host
| eval t=now()
| eval days_since_last_update=tonumber(strftime((t-last_update_time),"%d"))-1
| where days_since_last_update>30
| eval last_update_time=strftime(last_update_time, "%Y-%m-%d %H:%M:%S")
| table last_update_time host days_since_last_update
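A sketch of a faster variant using tstats, which reads indexed metadata instead of scanning raw events. It also replaces the strftime day arithmetic, which only behaves for gaps under a month, with plain division by 86400 seconds:

| tstats max(_time) as last_update_time where index=* by host
| eval days_since_last_update=floor((now()-last_update_time)/86400)
| where days_since_last_update>30
| eval last_update_time=strftime(last_update_time, "%Y-%m-%d %H:%M:%S")
| table last_update_time host days_since_last_update

The | metadata type=hosts index=* search shown above is another cheap option; its lastTime field carries the same information.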
There are lots of different skills required or desirable for setting up and utilising Splunk: architecture, infrastructure design, network management, data science, UX design, coding (to some degree), etc. There are plenty of courses available through Splunk Education and other providers. Professional services can also help you with this.
Hello @ITWhisperer, Thank you for your response. I wanted to let you know that API responses are not published in Splunk yet. While I have limited experience with SPL and haven't built any Splunk dashboards, I am very interested in working with Splunk. Looking forward to your guidance!  
Hello Splunkers!!

I have ingested data into Splunk from the source system using the URI "https://localhost:8088/services/collector" along with the HEC token. However, the data is not being displayed in Splunk with the appropriate sourcetype parsing, which is affecting the timestamp settings for the events. The sourcetype and timestamp are currently being displayed as below.

My actual props.conf settings are as below:

[agv_voot]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
KV_MODE = json
pulldown_type = 1
TIME_PREFIX = ^\@timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
TIMESTAMP_FIELDS = @timestamp
TRANSFORMS-trim_timestamp = trim_long_timestamp

transforms.conf:

[trim_long_timestamp]
REGEX = (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3})\d+(-\d{2}:\d{2})
FORMAT = $1

Please help me fix the parsing so that the sourcetype and timestamp come out correctly.
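Two things worth checking (a sketch, not a verified fix). First, events sent to /services/collector only get this sourcetype if the HEC token's default sourcetype is agv_voot or the event envelope sets it explicitly, for example (payload values are hypothetical):

{"sourcetype": "agv_voot", "event": {"@timestamp": "2025-01-02T10:15:30.123456789-05:00", "message": "hello"}}

Second, as the first post above explains, the /event endpoint does not look inside the event body for a timestamp unless auto_extract_timestamp=true is on the URL or the envelope carries a time field. And even when it does, TIME_PREFIX = ^\@timestamp cannot match, because in the JSON text the key is quoted and rarely at line start; a regex matching the actual text would look more like:

TIME_PREFIX = "@timestamp"\s*:\s*"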