All Posts

Well... that's why you normally hire a skilled architect for such a job. But seriously, there are several different issues to address here, and they might be resolved in different ways depending on your particular circumstances.

Like with your "migrate then upgrade" vs. "upgrade then migrate" question. Both are valid scenarios and both have their pros and cons. I'd probably go for upgrade-then-migrate because it limits the number of upgrades you have to do, but you might want to migrate first so you have the target architecture in place before attempting upgrades (and can roll back to the old version while keeping the updated architecture).

The typical approach would probably be to first separate the SH tier, make sure everything is working OK, and then expand your indexer tier to a clustered setup. Yes, you need a separate CM. Theoretically you could host the CM on the same machine as the SH, but you should definitely _not_ do that - the SH is for searching, the CM is for managing the cluster, and you don't want to mix those functionalities. A sketch of the relevant server.conf stanzas is below.

The specs... depend on your projected usage and load. As simple as that. We can't tell you beforehand what your load will be.

Can't help with the S3 issue - not enough experience to reliably advise something.
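To make the "separate CM" point concrete, here is a minimal sketch of the server.conf clustering stanzas for a small cluster, assuming Splunk 9.x setting names. The hostnames, pass4SymmKey and replication/search factors are placeholders you would adjust to your own environment:

# server.conf on the cluster manager (dedicated instance)
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = changeme

# server.conf on each indexer (cluster peer)
[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = changeme

# server.conf on the search head
[clustering]
mode = searchhead
manager_uri = https://cm.example.com:8089
pass4SymmKey = changeme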
One important thing - Splunk is not a "monitoring tool", so it will not tell you about the state of a service (unless you have a specific input listing the states of services periodically, but as far as I know there is no such input by default). You will only be able to see an event saying that the service stopped or crashed or whatever if such an event is logged (and if you're ingesting those events). A sketch of what such a periodic input could look like is below.
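For illustration only - there is no such input out of the box, but a scripted input along these lines could produce periodic "state of the service" events. The script path, interval, sourcetype, index and the list of services are all hypothetical placeholders:

# inputs.conf on the forwarder
[script://./bin/service_state.sh]
interval = 300
sourcetype = service:state
index = main

# bin/service_state.sh
#!/bin/sh
# Emit one line per service with its current state
for svc in sshd nginx crond; do
  echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) service=$svc state=$(systemctl is-active $svc)"
done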
1. This is a very old thread. For better visibility you should rather start a new thread with a verbose description of your problem.
2. What does "doesn't work" mean? And did you do _any_ configuration on Splunk's side, or do you just expect Splunk to work out of the box?
3. As a side note - receiving syslog directly on a Splunk component is not the recommended way. The recommended architecture is to use an external syslog daemon and either write to files from which you'll pick up the events with a UF, or send them to a HEC input. A sketch of the file-based variant is below.
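A minimal sketch of that recommended setup, assuming rsyslog receiving on UDP 514, writing one file per sending host, and a Universal Forwarder monitoring the directory; the paths, index and sourcetype are placeholders:

# /etc/rsyslog.d/remote.conf - receive remote syslog and write one file per host
module(load="imudp")
input(type="imudp" port="514")
template(name="PerHost" type="string" string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="PerHost")

# inputs.conf on the Universal Forwarder
[monitor:///var/log/remote/*/syslog.log]
sourcetype = syslog
index = network
host_segment = 4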
https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTinput#services.2Fcollector.2Fevent

services/collector/event
"Sends timestamped events to HTTP Event Collector using the Splunk platform JSON event protocol when auto_extract_timestamp is set to "true" in the /event URL. An example of a timestamp is: 2017-01-02 00:00:00. If there is a timestamp in the event's JSON envelope, Splunk honors that timestamp first. If there is no timestamp in the event's JSON envelope, the merging pipeline extracts the timestamp from the event. If "time=xxx" is used in the /event URL then auto_extract_timestamp is disabled. Splunk supports timestamps using the Epoch format."

In other words - unless you specify your URI as /services/collector/event?auto_extract_timestamp=true, your timestamp will _not_ be extracted from the event itself (Splunk will not even bother looking for it - it will either get the data from the JSON envelope or will assume the current timestamp if there is no timestamp in the envelope). And even if the auto_extract_timestamp parameter is set to true, in the cases listed above extraction is not performed either.

See also https://www.aplura.com/assets/pdf/hec_pipelines.pdf
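Two hedged curl examples to illustrate the difference (hostname, port and token are placeholders; 8088 is the default HEC port):

# Timestamp comes from the JSON envelope ("time" is epoch seconds), no extraction is attempted
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"time": 1700000000, "sourcetype": "my:app", "event": "2017-01-02 00:00:00 something happened"}'

# No envelope timestamp, auto_extract_timestamp=true - Splunk tries to extract the timestamp from the event text
curl -k "https://splunk.example.com:8088/services/collector/event?auto_extract_timestamp=true" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"sourcetype": "my:app", "event": "2017-01-02 00:00:00 something happened"}'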
You _probably_ (haven't tested it myself, but I don't see why it shouldn't work) could do it using INGEST_EVAL. Something like:

# props.conf
[host::hostA]
TRANSFORMS-hostA = send_to_syslog

# transforms.conf
[send_to_syslog]
REGEX = .
INGEST_EVAL = _SYSLOG_ROUTING=if(source="whatever","my_syslog_group",null())

EDIT: OK, there is obviously a much easier way I forgot about:

# transforms.conf
[send_to_syslog]
REGEX = /somewhere/my/source/file.txt
SOURCE_KEY = MetaData:Source
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group
Hi! Is there any way to make the data retrieval rate slower? Something like 1h worth of data every 1m. When we try to pull 30 days of data from our Elastic server (about 4.4M events), it causes a huge network load spike and then the server stops responding.
If your LB is indeed an HTTP proxy, there is a good chance that you're already getting the X-Forwarded-For header. So it might be enough to enable the option I mentioned earlier.
I found a related case in Splunk Ideas: https://ideas.splunk.com/ideas/EID-I-1168. It's kinda complicated, since I'm new to Splunk and the SHC architecture is not simple.
Okay, so I need to talk with the LB/network team who can fix this then.
Thank you Giuseppe. But the problem is related to filtering just one of the sources of that host. If you have a REGEX that is able to catch only the events of that specific source, you're good to go. But imagine you don't have a REGEX that can catch all the events of that source - how can you filter then?
I think you're looking for fillnull_value: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/tstats#Optional_arguments
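A short hedged example of the syntax (the index and split-by fields are placeholders):

| tstats fillnull_value="NULL" count where index=* by host sourcetype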
Can you explain where I should put the perimeter.csv and show me a little example of what this file looks like? Thx
Hi fellows,

It's time to migrate our good old standalone Splunk Enterprise to a 'Small Enterprise deployment'. I read through tons of docs but unfortunately didn't find any step-by-step guide, so I have many questions. Maybe some of you can help.

- The existing server is CentOS 7, the new servers will be Ubuntu 22.04. Just before the migration, I plan to upgrade Splunk on it from 9.1.5 to the latest 9.3.1 (it wasn't updated earlier because CentOS 7 is not supported by 9.3.1). OR do I set up the new servers with 9.1.5 and upgrade them after the migration?
- Our daily volume is 300-400 GB/day and it will not grow drastically in the medium term. What are your recommended hardware specs for the indexers? Can we use the "Mid-range indexer specification" or should we go for the "High-performance indexer specification" (as described at https://docs.splunk.com/Documentation/Splunk/9.3.1/Capacity/Referencehardware )?
- If I understand correctly, I can copy /etc/apps/ from the old server to the new Search Head, so I will have all the apps, saved searches, etc. But which config files must be modified to send the data to the new indexers? (For forwarders this is clear, but we are using a lot of other inputs: REST API, HEC, scripted.)
- Do I configure our existing server as part of the indexer cluster (3rd member) and then remove it from the cluster once all the replications are done on the new servers? Or do I copy the index data to one of the new indexers, rename the buckets (adding the indexer's unique ID) and let the cluster manager do the job? (Do I need a separate cluster manager, or could the SH do the job?)

And here comes the big twist...
- Currently, we are using S3 storage via NAS Bridge for the cold buckets. This solution is not recommended and we are already experiencing fallbacks. So we planned to change the configuration to use SmartStore. How can I move the current cold buckets there? (A lot of data, and because of the NAS Bridge it is very, very slow to copy...)

Thanks in advance
Norbert
Thanks @gcusello . I will try this and let you know. You are right about the lookup. Whenever the IP values change, the lookup file also needs to be updated. I managed to get the output using the lookup earlier, but then I tried to update the lookup file in the same query, which messed up the column names, and the query won't work anymore since the values changed. I will try the summary method and let you know. Thanks again.
Hello, I want to send syslog entries to Splunk directly. The config is done with the command "syslogadmin --set -ip xxx.xxx.xxxx.xxx -port 65456". Here is the configuration in place, but it does not work. Are there other ports or other configuration to set up? Any ideas?

syslogadmin --show -ip
syslog.1 xxx.xxx.xxx.29 port 65456
syslog.2 xxx.xxx.xxx.30 port 65456
syslog.3 xxx.xxx.xxx.31 port 65456
syslog.4 xxx.xxx.xxx.80 port 65456

syslogadmin --show -facility
Syslog facility: LOG_LOCAL7

auditcfg --show -filter
Audit filter is enabled.
1-ZONE 2-SECURITY 3-CONFIGURATION 4-FIRMWARE 5-FABRIC 7-LS 8-CLI 9-MAPS
Severity level: INFO
| metadata type=hosts index=*
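If you also need the "last seen more than 30 days ago" filter, a hedged extension of that metadata search could look like this (metadata returns lastTime per host, so no raw events need to be scanned):

| metadata type=hosts index=*
| eval days_since_last_update=floor((now()-lastTime)/86400)
| where days_since_last_update>30
| eval last_update_time=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| table last_update_time host days_since_last_update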
By default HEC runs on HTTPS. If you really want to disable SSL then you can change it by doing the below:

- In the Splunk UI go to Settings -> Data Inputs -> HTTP Event Collector
- Click the "Global Settings" button and uncheck the "Enable SSL" option

------ If you find this solution helpful, please consider accepting it and awarding karma points !!
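The same change can also be made in configuration, assuming you manage the HEC instance's inputs.conf directly (a sketch; the [http] stanza holds the HEC global settings and a restart is needed afterwards):

# inputs.conf
[http]
enableSSL = 0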
You can do this with CSS - essentially you need a multivalue field with the colour you want as the second value. This can be the result of some calculation, e.g. based on the first three characters of the field (a small SPL sketch is below). Then you can follow this link to see how it is done in similar circumstances: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-update-table-cell-color-as-per-the-another-field/m-p/599965#M49240
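A hedged SPL sketch of building such a multivalue field (the field name, prefixes and colours are placeholders; the CSS side is covered in the linked post):

| eval colour=case(substr(my_field,1,3)="ERR","red", substr(my_field,1,3)="WRN","orange", true(),"green")
| eval my_field=mvappend(my_field, colour)
| fields - colour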
How do I get an output containing all host details of all time along with their last update times? The search below is taking a huge amount of time - how can it be optimized for a faster search?

index=*
| fields host, _time
| stats max(_time) as last_update_time by host
| eval t=now()
| eval days_since_last_update=tonumber(strftime((t-last_update_time),"%d"))-1
| where days_since_last_update>30
| eval last_update_time=strftime(last_update_time, "%Y-%m-%d %H:%M:%S")
| table last_update_time host days_since_last_update