Dear experts,

According to the documentation, after stats only the fields used in the stats command remain.

| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| stats count as periodCount by zbpIdentifier
| sort -periodCount
| head 10
| fields zbpIdentifier zbpIdentifier_bp periodCount importZeit_uF

To explain in detail: after table, the following fields are available: importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode. After stats count, only zbpIdentifier and periodCount are left.

Question: how can I change the code above to get the count and still have all the fields available as before? Thank you for your support.
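One common pattern for keeping the original fields alongside an aggregate (a sketch based on the field names in the question, not a tested answer from this thread) is eventstats, which adds the aggregate as a new field instead of collapsing events:

```
| table importZeit_uF zbpIdentifier bpKurzName zbpIdentifier_bp status stoerCode
| where stoerCode IN ("K02")
| eventstats count as periodCount by zbpIdentifier
| sort -periodCount
```

Note that, unlike stats, eventstats keeps every event, so a trailing `| head 10` would limit events rather than distinct zbpIdentifier values; a `| dedup zbpIdentifier` before `head` would be needed to get a top-10 list of identifiers.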
If the data is the same as before but the presentation is different, then something is different in the settings now. Use the btool command (part of the Admin's Little Helper app - a mandatory app for Splunk Cloud customers, IMO) to review the settings and make sure they are being applied as expected.
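For example (a sketch - the sourcetype name my_json_sourcetype is a placeholder, not from this thread), btool can list every effective props.conf setting for a sourcetype along with the file each one comes from:

```
splunk btool props list my_json_sourcetype --debug
```

The --debug flag is what prints the originating file path next to each setting, which makes it easy to spot an app whose configuration is overriding the one you migrated.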
Thanks for the response. Everything was migrated over and is exactly the same as before. You would think there would be a toggle to always use highlighted syntax, since it's already parsing JSON.
Have you migrated/moved the original props.conf from on-prem to Cloud? If you still have it somewhere, just create an app from it and install that into Cloud. Of course, you must ensure that it takes precedence over the current configuration in Cloud.
Timestamps are extracted before INGEST_EVAL is performed, so you'll need to use the := operator to replace _time. These props should work better than those shown.

[QWERTY]
# better performance with LINE_BREAKER
SHOULD_LINEMERGE = false
# We're not breaking events before a date
BREAK_ONLY_BEFORE_DATE = false
# Break events after a newline and before "~01~"
LINE_BREAKER = ([\r\n]+)~\d\d~
MAX_TIMESTAMP_LOOKAHEAD = 43
TIME_FORMAT = %Y%m%d-%H%M%S;%3N
# Skip to the second timestamp (after the milliseconds of the first TS)
TIME_PREFIX = ;\d{3}~
MAX_DAYS_AGO = 10951
REPORT-1 = some-report-1
REPORT-2 = some-report-2
When I click on the raw log and back out of it, it shows up as highlighted. How do I default the sourcetype/source to always show as highlighted? I've messed with the props.conf and can't get it. This only started occurring after we migrated from on-prem Splunk to Splunk Cloud. Before, these logs would automatically show up parsed as JSON.
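JSON rendering in the events viewer is usually driven by search-time settings in props.conf on the search head. A minimal sketch (the sourcetype name my_json_sourcetype is a placeholder; in Splunk Cloud this would live in an app installed via the UI or ACS):

```
[my_json_sourcetype]
KV_MODE = json
```

KV_MODE = json enables automatic search-time JSON field extraction, which is typically what causes the UI to render events as highlighted, pretty-printed JSON; whether it fully explains the behavior described here would need to be verified with btool.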
The WARNING text can be ignored safely.  The rest is normal.
I see that INGEST_EVAL allows for the use of conditionals. Thank you very much, I'll give that a try.
Hi, for the Splunk search head component I have copied the configuration files from the old virtual machine box to the new physical server box. After that I started the Splunk services on the new physical box, but the Web UI is not loading, and I get the message below:

Waiting for web server at https://127.0.0.1:8000 to be available................WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details. . Done
Probably you could do this with INGEST_EVAL? Just test whether you can do this with "eval ......" in one line, adding several of those one after another with a suitable if, etc. If/when you get it working in SPL, just copy it into transforms.conf as one INGEST_EVAL expression.
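A minimal sketch of that approach for the sample event in this thread (the stanza name, the "-1y" sanity cutoff, and the substr offsets are illustrative assumptions, not tested config):

```
# transforms.conf
[fix_bad_timestamp]
# If the extracted timestamp is implausibly old, re-parse the first
# timestamp (characters 5-23 of "~01~20241009-100922;899~...") instead.
INGEST_EVAL = _time:=if(_time < relative_time(now(), "-1y"), strptime(substr(_raw, 5, 19), "%Y%m%d-%H%M%S;%3N"), _time)

# props.conf
[QWERTY]
TRANSFORMS-fixts = fix_bad_timestamp
```

As noted elsewhere in this thread, timestamp extraction runs before INGEST_EVAL, which is why the := operator is needed to overwrite _time.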
@Eldemallawy 
In Splunk, EPS (Events Per Second) is a metric used to measure the rate at which events are ingested into the Splunk indexer. The formula to calculate EPS is relatively straightforward:

EPS = (Total Number of Events) / (Time Duration in Seconds)

To calculate EPS, count the total number of events indexed within a specific time duration and divide that count by the duration in seconds. For example, if you calculate EPS over a 1-minute window (60 seconds) during which 3,000 events were indexed:

EPS = 3000 / 60 = 50 events per second

This means you are indexing, on average, 50 events per second during that 1-minute period.
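The same calculation can be done directly in SPL (a sketch; the index name is a placeholder):

```
index=my_index earliest=-60s
| bin _time span=1s
| stats count by _time
| stats avg(count) as avg_eps max(count) as peak_eps
```

The first stats counts events per one-second bucket; the second summarizes those buckets into average and peak EPS over the window.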
Many thanks @kiran_panchavat, much appreciated. Cheers, Ahmed.
@Eldemallawy 
1. Identify and prioritize the data types within the environment.
2. Install the free license version of Splunk.
3. Take the highest priority data type and start ingesting its data into Splunk, making sure to add servers/devices slowly so the data volume does not exceed the license. If data volumes are too high, pick a couple of servers/devices from the different types, areas, or locations to get a good representation of the servers/devices.
4. Review the data to ensure that the correct data is coming in. If unnecessary data is being ingested, it can be dropped to further optimize the Splunk implementation.
5. Make any needed adjustments to the Splunk configurations, then watch the data volume over the next week to see the high, low, and average size of the data per server/device.
6. Multiply these numbers by the total number of servers/devices to find the total data volume for this data type.
7. Repeat this process for the other data types until you are done.
@Eldemallawy 
Estimating the Splunk data volume within an environment is not an easy task, due to several factors: the number of devices, the logging level set on devices, the data types collected per device, user levels on devices, load volumes on devices, the volatility of all data sources, not knowing what the final logging level will be, and not knowing which events can be discarded.

Estimate indexing volume:
1. Verify raw log sizes.
2. Daily, peak, retained, and future volume.
3. Total number of data sources and hosts.
4. Add volume estimates to the data source inventory/spreadsheet.

Estimate index volume size:
1. For syslog-type data, the index occupies ~50% of the original size:
2. ~15% of raw data (compression), plus
3. ~35% for associated index files.
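Those percentages translate into a quick disk-sizing calculation, sketched here with makeresults (the 100 GB/day raw figure is a made-up example):

```
| makeresults
| eval raw_gb_per_day=100
| eval compressed_gb=raw_gb_per_day*0.15
| eval index_files_gb=raw_gb_per_day*0.35
| eval disk_gb_per_day=compressed_gb+index_files_gb
```

For 100 GB/day of raw syslog-type data, this yields roughly 50 GB/day of index storage; multiply by the retention period in days for total disk.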
I would respectfully disagree. Ours is an observability platform, and to turn on anomaly detection and predictive analytics we need a constant flow of 90 days of metrics data. This is not a static number, as customers add and remove CIs from their platforms. We are also an MSP with hundreds of customers, and for data sovereignty we create many indexes per customer. Typically each of our platforms has between 200-300 customer indexes. I am trying to automate away the toil of constantly having to review and update the config, to ensure we keep the right data ingestion to feed and water the anomaly detection and predictive analytics without over-provisioning our storage.
Thanks Kiran,

I was looking for a way to estimate the volume of data that will be ingested into Splunk before installing it. This will help me calculate the license cost.

Therefore, is there a way to estimate the volume of DNA-C metrics based on the number of LAN/WLAN devices?

Cheers, Ahmed.
My configuration has not changed. I have verified that buckets are being created, and I have verified that a hot_quar_v1 bucket is being created. Why is it being created and how do I remove it?
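Quarantine buckets (the hot_quar_v1 naming) are normally created when events arrive with timestamps far outside the acceptable range for the index. One way to inspect them (a sketch; the index name is a placeholder) is dbinspect:

```
| dbinspect index=my_index
| search path=*quar*
| table bucketId state startEpoch endEpoch path
```

Comparing startEpoch/endEpoch of the quarantine bucket against the expected event time range can reveal which source is sending the badly timestamped events.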
You have earliest and latest explicitly set in your searches, so they should be used. Again - check the job inspector and job logs to see what searches are being run and how (the search contents, the time ranges, the expanded subsearch...).
Ok. So you have a very unusual border case. Typically systems need to handle a constant amount of storage, and that's what Splunk does.
I have an event like this:

~01~20241009-100922;899~19700101-000029;578~ASDF~QWER~YXCV

There are two timestamps in this. I have set up my stanza to extract the second one, but in this particular case the second one is what I consider "bad". For the record, here is my props.conf:

[QWERTY]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
MAX_TIMESTAMP_LOOKAHEAD = 43
TIME_FORMAT = %Y%m%d-%H%M%S;%3N
TIME_PREFIX = ^\#\d{2}\#.{0,19}\#
MAX_DAYS_AGO = 10951
REPORT-1 = some-report-1
REPORT-2 = some-report-2

The consequence of this seems to be that Splunk indexes the entire file as a single event, which is something I absolutely want to avoid. Also, I do need to use line merging, as the same file may contain XML dumps. So what I need is something that implements the following logic:

if second_timestamp_is_bad:
    extract_first_timestamp()
else:
    extract_second_timestamp()

Any tips / hints on how to mitigate this scenario using only options / functionality provided by Splunk are greatly appreciated.