Activity Feed
- Karma Re: developpement for gcusello. 02-07-2024 02:00 PM
- Posted Re: expand the table in the dashboard on Dashboards & Visualizations. 02-07-2024 01:53 PM
- Posted Re: How to calculate daily volume of logs ingested by index on Splunk Search. 02-07-2024 01:40 PM
- Karma Re: Between dates condition for gcusello. 02-03-2023 03:56 AM
- Karma Re: Is there a way to speed up Splunk restarts in a development environment? for scelikok. 01-29-2023 06:43 AM
- Got Karma for Re: What are the best practices for implementing Smartstore?. 01-27-2023 09:20 AM
- Got Karma for Re: Horizontal Bar Chart Bar Colors. 01-26-2023 09:19 AM
- Posted Re: Horizontal Bar Chart Bar Colors on Splunk Search. 01-26-2023 08:15 AM
- Got Karma for Re: What are the best practices for implementing Smartstore?. 01-25-2023 05:01 PM
- Karma Splunk Cloud Platform Migration Lessons Learned for WhitneySink. 01-25-2023 01:53 PM
- Posted Re: What are the best practices for implementing Smartstore? on Deployment Architecture. 01-25-2023 01:48 PM
- Posted Re: Bar graph not showing full y axis values on All Apps and Add-ons. 01-25-2023 12:30 PM
- Got Karma for Re: Regex help. 01-24-2023 03:13 AM
- Posted Re: Regex help on Getting Data In. 01-24-2023 02:47 AM
- Got Karma for Re: Regex help. 01-24-2023 02:32 AM
- Posted Re: Regex help on Getting Data In. 01-24-2023 02:29 AM
- Karma Re: Is there any way to set different email receipt for for "send email" alerting action ? for ITWhisperer. 01-24-2023 02:29 AM
- Got Karma for Re: Does using Base Searches increase the performance of the dashboards?. 01-23-2023 05:32 PM
- Posted Re: Splunk HTTP Event Collector's definition of size limit on Getting Data In. 01-23-2023 12:41 PM
02-07-2024
01:53 PM
Hi @gitingua, you could maybe try the solution mentioned in the comments here: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-change-the-width-of-panels-in-xml/m-p/289203
02-07-2024
01:40 PM
Hi @jaibalaraman, try:
index=_internal source=*metrics.log group=per_index_thruput earliest=-1w@d latest=-0d@d
| timechart span=1d sum(kb) as Usage by series
| eval Usage = round(Usage/1024/1024, 3)
01-26-2023
08:15 AM
1 Karma
Hi @jason_hotchkiss, I've done something similar before; adding the below should work:
<option name="charting.fieldColors">{"1 - High": <insert CSS color code for desired red here>, "2 - Medium": <insert CSS color code for desired orange here>, "3 - Low": <insert CSS color code for desired yellow here>, "unknown": <insert CSS color code for desired blue here>}</option>
Thanks, Jamie
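For reference, a filled-in version of that option could look like the following; the hex values are only illustrative picks for a red, orange, yellow and blue, not anything from the original thread:

<option name="charting.fieldColors">{"1 - High": 0xD93F3C, "2 - Medium": 0xF58F39, "3 - Low": 0xF7BC38, "unknown": 0x006D9C}</option>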
01-25-2023
01:48 PM
2 Karma
Hi @Chiranjeev88, it's been a couple of years since I worked with SmartStore, but from what I can remember:

1. Understand the search profile of the indexes you're planning to move to SmartStore via a combination of searching the _audit index and speaking to the users of the data, to confirm how many searches are run over the last 24 hours, last 7 days, last 30 days, etc. If there are frequent searches looking back over long time ranges, reconsider moving the index to SmartStore.
2. Configure an appropriate cache size (I believe the requirements increase if you are using accelerated data models).
3. Consider the extra load on the indexers' disks due to SmartStore. When a search runs that requires data from the remote store, the files / buckets have to be written to disk after they are downloaded. The default for "max_concurrent_downloads" is 8, which means there can be 8 buckets being downloaded, and therefore 8 buckets being written to disk, concurrently. Because of this, Splunk recommended we change our disks to NVMe SSD before moving to SmartStore. You can reduce some of the extra disk load by lowering the number of concurrent downloads; of course this comes at the cost of search performance.
4. Consider the performance / current hardware of the remote store. Some object stores can only read a certain number of "large" objects (tsidx or raw data files) at one time before performance degrades significantly. With 8 concurrent downloads per indexer and 50 indexers, a single search can potentially try to read 400 large objects at once. Confirm the remote store can handle that number of concurrent reads.
5. Consider using workload rules (https://docs.splunk.com/Documentation/Splunk/9.0.3/Workloads/WorkloadRules) to control who can search the SmartStore indexes over longer time ranges, to avoid users unsuspectingly downloading a large number of buckets.
6. If you have a lot of sparse or super-sparse searches running over longer time ranges, consider increasing the default of "hotlist_bloom_filter_recency_hours" in server.conf (or in indexes.conf to apply it to a specific index) to keep the bloom filters and smaller metadata files for buckets on the indexers' disks.
7. The recommended bucket size for SmartStore indexes is 750MB (maxDataSize = auto in indexes.conf), so if you have any auto_high_volume (10GB) indexes, consider switching them to auto a week or so before moving to SmartStore to ensure the buckets being searched are the recommended size.
8. Test SmartStore in production using a test index (e.g. by copying real data to a new index) before moving an index that is in use. This lets you confirm the performance of the remote store and your indexers. Once you move an index to SmartStore you can't move it back.

A rough sketch of the settings mentioned in points 3, 6 and 7 follows below.

Thanks, Jamie
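For illustration only, roughly where the settings from points 3, 6 and 7 live; the index name, volume name, bucket path and values are placeholders, not recommendations:

# indexes.conf (illustrative)
[my_smartstore_index]
remotePath = volume:remote_store/$_index_name
# auto keeps buckets at the recommended ~750MB
maxDataSize = auto

[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket

# server.conf (illustrative)
[cachemanager]
# default is 8; lowering it reduces disk pressure at the cost of search performance
max_concurrent_downloads = 8
# example value; keeps bloom filters and small metadata files cached longer
hotlist_bloom_filter_recency_hours = 360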
01-25-2023
12:30 PM
Hi @SS1, have you tried making the panel bigger, e.g. <option name="height">500</option>? Thanks, Jamie
01-24-2023
02:47 AM
1 Karma
@bosseres Sorry, I tested it like: because I assumed the actual _raw field / event in Splunk would start like: "some text text text something something"; is that not correct? Thanks
01-24-2023
02:29 AM
1 Karma
Hi @bosseres, This will match everything from the start of the event until there are 3 or more spaces: | rex field=_raw "^(?<description>.+?)\s{3,}" Thanks, Jamie
01-23-2023
12:41 PM
Hi @King_Of_Shawn, To confirm, are you setting the HTTP "Content-Encoding" header to be "gzip"? Any other details you can provide about how you're sending the data would be useful. Thanks, Jamie
01-23-2023
12:37 PM
1 Karma
My experience is very much the same as this. I find base searches which are too generic and return a large number of non-aggregated events perform very poorly. In addition, they also use a large amount of disk space, which counts towards the user's disk quota (until the search artifacts are removed). This can in turn lead to a lot of queuing of the user's search jobs, so it's worth noting how much disk space base searches use, via the job inspector or the _introspection logs, when developing the dashboard.

In general I've found the best approach is to use base searches for summary panels at the top of dashboards which use aggregated data (e.g. single value visualizations), and to avoid base searches when you want to display non-aggregated events (e.g. in a table).

One other thing to note: if the dashboard is very heavily used and you can use base searches to reduce the number of searches that run when the dashboard loads (or alternatively put panels behind checkboxes etc.), then you can reduce the concurrent search load on your Splunk servers. This can be important in large distributed environments with a large number of concurrent users. In that case you might be happy to have the dashboard take a few extra seconds to load if it results in fewer searches being executed. A minimal sketch of the base search / post-process pattern is below.
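As a minimal sketch of the base search / post-process pattern described above (the index, field names and visualization choice are placeholders, not from the original thread):

<dashboard>
  <label>Base search example</label>
  <!-- base search returns a small, aggregated result set -->
  <search id="base_summary">
    <query>index=my_index | stats count by status</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <single>
        <!-- post-process search re-uses the base results instead of running a new search -->
        <search base="base_summary">
          <query>| stats sum(count) as total_events</query>
        </search>
      </single>
    </panel>
  </row>
</dashboard>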
01-23-2023
11:09 AM
Hi @ipteam, Something like this should work for you: <your search goes here> | rex field=<insert field name that contains timestamp here> "(?<year>\d{4})\/(?<month>\d{2})\/(?<day>\d{2})\s+(?<hour>\d{2})\:(?<minutes>\d{2})\:(?<seconds>\d{2})" Thanks, Jamie
01-23-2023
10:37 AM
Hi @zpasplunk, how many indexers do you have in the cluster? I believe maxTotalDataSizeMB is applied per indexer.

Another way to remove old data from an index is to set: frozenTimePeriodInSecs = <number of seconds before data is rolled to frozen>

One thing to note here is that each bucket contains data across a given time span (e.g. it could be 1 hour or 1 day), up to a maximum of the value set for maxHotSpanSecs. The earliest timestamp of an event (oldest event) and the latest timestamp of an event (most recent) are in the name of the bucket (https://docs.splunk.com/Documentation/Splunk/9.0.3/Indexer/HowSplunkstoresindexes#Bucket_naming_conventions). Before a bucket is rolled to frozen, the timestamp of its most recent event must be older than the value set for frozenTimePeriodInSecs, which means there can be cases where the index still contains data that's older than you'd expect.
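For illustration only, how those settings sit together in indexes.conf; the index name and values are placeholders, not recommendations:

# indexes.conf (illustrative)
[my_index]
# ~90 days; a bucket rolls to frozen once its newest event is older than this
frozenTimePeriodInSecs = 7776000
# applied per indexer, not across the whole cluster
maxTotalDataSizeMB = 500000
# cap the time span covered by a single hot bucket to roughly one day
maxHotSpanSecs = 86400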
01-20-2023
02:48 PM
1 Karma
Hi @GaetanVP, I don't think this is possible. However, you can configure LINE_BREAKER to break lines on multiple different sequences of characters, e.g. LINE_BREAKER = (\r\n)*(\{\"time\"|\{\"timestamp\") would break on both {"time" and {"timestamp". I'm not sure if this will meet your requirements but wanted to mention it just in case it does. Thanks, Jamie
07-31-2022
02:11 AM
Hi @si_infrastructu, might this work for you? https://docs.splunk.com/Documentation/SCS/current/SearchReference/ConditionalFunctions#like.28.26lt.3Bstr.26gt.3B.2C_.26lt.3Bpattern.26gt.3B.29 Thanks, Jamie
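For example, something along these lines, where the field name and pattern are placeholders rather than anything from the original question: ... | where like(status, "4%") keeps only events whose status field starts with "4" (% is the wildcard in like()).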
07-31-2022
01:52 AM
Hi @Vani_26, From the inputs.conf docs regarding "followTail":

* If you set to "1", monitoring starts at the end of the file (like *nix 'tail -f'). The input does not read any data that exists in the file when it is first encountered. The input only reads data that arrives after the first encounter time.

So it could be that if you create a file there, none of the initial data will be indexed. The docs also don't recommend having followTail set to 1. I think best practice would be to create a new file each time there is new data to be written to Splunk. Thanks, Jamie
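For illustration, the setting in question inside an inputs.conf monitor stanza; the path is a placeholder:

# inputs.conf (illustrative)
[monitor:///var/log/myapp/]
# 0 is the default: read each newly discovered file from the beginning rather than only tailing new data
followTail = 0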
07-31-2022
01:35 AM
Hi @sanglap Can you just provide the time like: earliest="2022-07-29:18:33:20" latest="2022-07-29:18:34:20" There are some examples here: https://docs.splunk.com/Documentation/SCS/current/Search/Timemodifiers#Examples Thanks, Jamie
07-31-2022
01:21 AM
Hi @dorlevi, Have you tried setting EVENT_BREAKER in props.conf on the UF? You'd also need to set EVENT_BREAKER_ENABLE = true, and it might be worth looking at setting TRUNCATE if you have large events. (I'm assuming you're forwarding to something other than Splunk.) Thanks, Jamie
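For illustration only, the kind of props.conf stanza this would involve on the universal forwarder; the sourcetype name and regex are placeholders (TRUNCATE would be set on whichever instance parses the data):

# props.conf on the UF (illustrative)
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
# break the stream at newlines; adjust the regex to match your event boundaries
EVENT_BREAKER = ([\r\n]+)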
07-27-2022
12:59 AM
1 Karma
Hi @neeravmathur, Yes, it's possible. It's really just a case of configuring both cluster masters in server.conf on the SHs. Thanks, Jamie
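For illustration only, the kind of server.conf stanzas this involves on each search head; the stanza names, URIs and keys are placeholders:

# server.conf on the search heads (illustrative)
[clustermaster:clusterA]
master_uri = https://cluster-master-a.example.com:8089
pass4SymmKey = <clusterA key>

[clustermaster:clusterB]
master_uri = https://cluster-master-b.example.com:8089
pass4SymmKey = <clusterB key>

[clustering]
mode = searchhead
master_uri = clustermaster:clusterA, clustermaster:clusterB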
06-28-2022
02:27 PM
Hi @lucacaldiero, In the docs (https://docs.splunk.com/Documentation/Splunk/9.0.0/RESTREF/RESTsearch#search.2Fjobs.2F.7Bsearch_id.7D) the example response to the DELETE request is: <response><messages><msg type='INFO'>Search job cancelled.</msg></messages></response> So I wouldn't think the search artifacts are removed from disk by this REST call, and they will therefore still count towards the user's disk quota. If you then also set the ttl for the searches to an hour, you may just be hitting the disk quota for the user and therefore aren't able to run any more searches until the search artifacts are cleared. You could confirm the above by checking whether the artifacts of the executed searches in $SPLUNK_HOME/var/run/splunk/dispatch/ are removed after the DELETE request, and by checking the _audit index to confirm whether the disk quota for the user executing the searches has been reached. Thanks, Jamie
06-28-2022
03:28 AM
Hi @timo258 , After the mvexpand you could try: | stats sum(Hours) as Hours_total by Skill SkillLevel | stats list(Skill) list(SkillLevel) by Hours_total Then use table and/or rename as you need to get the correct order and name of columns. Thanks, Jamie
06-28-2022
03:18 AM
Hi @Anji_splunk, There will most likely be a log file for the custom alert action provided by the Swimlane app inside the _internal index. If you run: index=_internal | stats values(source) you'll be able to see all of the log files, and there should be one relating to the Swimlane alert action that most likely contains a more useful error message, so it might be worth having a look there. Thanks, Jamie
06-27-2022
03:07 PM
Hi @glpadilla_sol, I believe "An increase in search tier capacity corresponds to increased search load on the indexing tier, requiring scaling of the indexer nodes." comes from the fact that the total number of searches a SH or SH cluster can execute at any one time is determined by the number of CPUs the search heads have. Using the defaults, each search head can execute base_max_searches (6) + number of vCPUs on the host x max_searches_per_cpu concurrent searches. These settings are in limits.conf. There are further settings controlling the percentage of this value assigned to scheduled and real-time searches, but I'll ignore those because they aren't relevant here.

So if you had 3 search heads, each with 8 vCPUs, your cluster could execute the following at the same time: 6 + 8 x 1 = 14 searches per SH, x 3 SHs = 42 searches.

Typically each search takes a single vCPU on each indexer (assuming the data is evenly spread and every indexer responds to every search, which we'd expect), so you'd probably want at least 40-50 vCPUs (probably more, to account for indexing) on your indexers if the limits above are being reached in your environment. If you'll never hit 42 concurrently executing searches, you don't need as many cores on the indexers.

Now, if you were to increase the vCPUs on the search heads, you would increase the total number of concurrent searches that can execute in your environment (unless you changed the values in limits.conf above), therefore potentially putting more load onto the indexers; if they don't have enough vCPUs to handle the increased search load, it could lead to different issues.

Regarding memory, I can't see any reason why increasing the memory on the search heads increases the load on the indexers. Your search heads might typically use more memory for searches, since that is where the reduce stage of the search occurs, so searches that use | dedup or similar can be memory heavy. In that case it may make sense to add more memory to your SHs if you are typically hitting high percentages of memory usage, which you can check via the _introspection index. The concurrency calculation is sketched below.

Thanks, Jamie
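As a sketch of that concurrency arithmetic, using the default limits.conf values referenced above (the vCPU and search head counts are just the example numbers from this reply):

# limits.conf [search] defaults
[search]
base_max_searches = 6
max_searches_per_cpu = 1

# per search head with 8 vCPUs: 6 + (8 x 1) = 14 concurrent historical searches
# across 3 search heads: 14 x 3 = 42 concurrent searches hitting the indexing tier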
06-27-2022
02:53 PM
Hi @iamsplunker, If you set TIME_PREFIX to ^, I assume that means you are asking Splunk to look for the timestamp at the start of the event; however, that doesn't seem to be where it is located. Try changing the prefix to Date: and increasing the lookahead slightly. Thanks, Jamie
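For illustration only, the kind of props.conf change being suggested; the sourcetype name and lookahead value are placeholders:

# props.conf (illustrative)
[my_sourcetype]
TIME_PREFIX = Date:\s*
MAX_TIMESTAMP_LOOKAHEAD = 40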
06-27-2022
02:44 PM
1 Karma
Hi @zacksoft_wf, When you use the collect command, Splunk writes the search results to the $SPLUNK_HOME/var/spool/splunk/ directory on the disk of the host that executed the search. By default Splunk is configured (via $SPLUNK_HOME/etc/system/default/inputs.conf) to monitor this directory for any new files and put their contents through the indexing pipeline, similar to a standard file input.

The index used in the collect command determines which index the events go to. The source and sourcetype you use in the collect command determine what props/transforms (field extractions etc.) apply at index time (e.g. whether the sourcetype creates indexed fields), and the same source and sourcetype have to be used when searching for the events, so the search-time parsing rules of the associated source and sourcetype are applied, which decide the interesting fields on the left-hand side.

"Will my SPL overwrite any of the existing fields?" - if you don't change the props/transforms config then nothing will be overwritten in terms of field extractions. It's more likely that the field extractions won't work for the data written to the summary index by the new search, depending on how similar the new data is to the data written by the existing searches. If the data is different, you would typically use different sourcetypes, with config specific to the format of the data, to make sure the data gets parsed correctly.

Also, typically the only reason you'd put different types of data into the same index is if it is searched together (i.e. index=myindex sourcetype=mysourcetype OR sourcetype=mysecondsourcetype); if you put a lot of different types of data into the same index it can impact search performance. (I can go into why this is the case in detail if you're interested.) A minimal collect example is sketched below.

Hope this helps, Jamie
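As a minimal sketch of the kind of collect call being discussed (the index, source and the search itself are placeholders; the sourcetype defaults to stash when not specified):

index=my_index sourcetype=my_sourcetype
| stats count by host
| collect index=my_summary source=my_daily_summary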
06-27-2022
02:19 PM
1 Karma
Hi @glpadilla_sol, The default in 8.2.2 actually seems to be:

[config_change_audit]
disabled = true
mode = auto

from: https://docs.splunk.com/Documentation/Splunk/8.2.2/Admin/Serverconf#Configuration_Change_Audit so you'll need to set disabled to false to enable it. Thanks, Jamie
06-27-2022
02:15 PM
Hi @djreschke, Do the first 256 bytes of the file happen to be the same for each file in the same input? If so, take a look at the following from the inputs.conf docs (https://docs.splunk.com/Documentation/Splunk/9.0.0/Admin/inputsconf):

crcSalt = <string>
* Use this setting to force the input to consume files that have matching CRCs,
or cyclic redundancy checks.
* By default, the input only performs CRC checks against the first 256
bytes of a file. This behavior prevents the input from indexing the same
file twice, even though you might have renamed it, as with rolling log
files, for example. Because the CRC is based on only the first
few lines of the file, it is possible for legitimately different files
to have matching CRCs, particularly if they have identical headers.
* If set, <string> is added to the CRC.
* If set to the literal string "<SOURCE>" (including the angle brackets), the
full directory path to the source file is added to the CRC. This ensures
that each file being monitored has a unique CRC. When 'crcSalt' is invoked,
it is usually set to <SOURCE>.
* Be cautious about using this setting with rolling log files; it could lead
to the log file being re-indexed after it has rolled.
* In many situations, 'initCrcLength' can be used to achieve the same goals.
* Default: empty string

Thanks, Jamie
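For illustration only, where crcSalt would be applied; the monitor path is a placeholder:

# inputs.conf (illustrative)
[monitor:///var/log/myapp/]
# adds the full source path to the CRC so files with identical first 256 bytes are treated as distinct
crcSalt = <SOURCE>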