All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello! I'm trying to change the timestamp (_time) of Perfmon:CPU events before indexing, so that they use my Splunk Heavy Forwarder's date instead of the original event timestamp. The Perfmon:CPU _raw is:

05/07/2020 15:46:37.269 -0300 collection=CPU object=Processor counter="% Processor Time" instance=_Total Value=1.887035386881708

My Splunk architecture is: Universal Forwarder -> Heavy Forwarder -> Indexer. I have tried the following configurations on my Heavy Forwarder (props.conf):

[source::Perfmon...]
DATETIME_CONFIG = CURRENT
MAX_TIMESTAMP_LOOKAHEAD = 1

[Perfmon:CPU]
DATETIME_CONFIG = CURRENT
MAX_TIMESTAMP_LOOKAHEAD = 1

[source::Perfmon:CPU]
DATETIME_CONFIG = CURRENT
MAX_TIMESTAMP_LOOKAHEAD = 1

None of these configurations worked; the _time of Perfmon:CPU events is still the original timestamp (the first line of _raw). I also configured a transform to remove the first line of the _raw event. Even with the first line removed, the _time field doesn't respect the DATETIME_CONFIG = CURRENT setting. Can anyone help me?
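For reference, a minimal sketch of the stanza that would normally force the clock time, assuming the data's timestamp extraction actually happens on the Heavy Forwarder (timestamp parsing occurs only once, at the first full Splunk Enterprise instance in the pipeline; if the data arrives already parsed, props.conf settings there are ignored):

```ini
# props.conf on the parsing tier (the first heavy forwarder the data reaches).
# Stanza names are case-sensitive and must match the sourcetype exactly.
[Perfmon:CPU]
# Use the parsing instance's current clock instead of extracting a
# timestamp from the event; MAX_TIMESTAMP_LOOKAHEAD is then irrelevant.
DATETIME_CONFIG = CURRENT
```

A common reason DATETIME_CONFIG = CURRENT appears not to work is that the settings were deployed to an instance downstream of where parsing already happened, or that the stanza name does not match the event's sourcetype exactly.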
Hi Experts, I have 2 Splunk Cloud setups, one for Europe and the other for the USA. How can I give them a common search head (SH) layer in this case? Regards, VG
Can we assign another field's value to _time, and then filter on it through earliest/latest or through the time range picker?
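One common sketch of this, assuming a hypothetical field named event_date in a known format: overwrite _time at search time with strptime, then use addinfo to honour the time range picker's boundaries (info_min_time/info_max_time are the picker's epoch values):

```
... | eval _time=strptime(event_date, "%Y-%m-%d %H:%M:%S")
| addinfo
| where _time>=info_min_time AND (info_max_time="+Infinity" OR _time<=info_max_time)
| fields - info_min_time info_max_time info_sid info_search_time
```

Note that earliest/latest in the base search still apply to the originally indexed _time; the re-evaluated _time only affects commands after the eval.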
I have 2 panels: one with a horseshoe meter and the other with a status indicator. The horseshoe meter shows a value as a percentage, while the status indicator shows a detailed number. I have aligned both panels vertically, but there is some space between them, and when I try to reduce the height of the panel, the horseshoe meter looks small. Also, in that row I have other panels which show the % utilization of certain metrics. So, I want to reduce the gap between the first horseshoe meter and its corresponding status indicator panel, without disturbing the height of the other panels or of the horseshoe meter panel itself. How can I reduce the space marked as "waste" in the image (the red vertical arrow)? I have attached an image for better understanding. Please let me know how this can be achieved.
We have a query running as an input in DB Connect. The query itself is successful (it takes about 30 seconds to run), and we have our query timeout set to 300 seconds just to ensure it completes. Once we set up our cron schedule to run it and store the results in our index (index=dbx), we still see no results being saved 10 minutes after the query should have run via cron. Any insights on what could be happening?
My electric meter sends a cumulative number, but I want to subtract the reading from an hour ago from the current reading, so I can chart the usage for each hour. My search:

source="/home/we/plex/movies/meter.elec" _time=* LastConsumptionCount=*
| table _time "LastConsumptionCount"
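A sketch of one way to turn a cumulative counter into hourly usage, assuming LastConsumptionCount only ever increases: bucket the readings by hour, then let delta subtract each hour's reading from the previous hour's:

```
source="/home/we/plex/movies/meter.elec" LastConsumptionCount=*
| timechart span=1h latest(LastConsumptionCount) as reading
| delta reading as hourly_usage
```

delta computes reading minus the previous row's reading, so the first row's hourly_usage will be empty; charting hourly_usage then shows consumption per hour rather than the running total.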
I have JSON data that can vary greatly in size, with the timestamp field coming at the end of each event. I'm able to parse all the timestamps correctly using the config TIME_PREFIX="timestamp":+ except for the events that are very large. My question is: in order to parse the timestamp for the very large events, do I need to add a MAX_TIMESTAMP_LOOKAHEAD? Or, if I added a larger TRUNCATE, would the TIME_PREFIX config still need the MAX_TIMESTAMP_LOOKAHEAD? props.conf:

[mysourcetype]
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = true
TIME_PREFIX = "timestamp":+
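For what it's worth, a sketch of the settings that usually matter here, assuming events can run to tens of thousands of bytes (the illustrative values below are placeholders, not recommendations):

```ini
[mysourcetype]
# Regex locating the text immediately before the timestamp.
TIME_PREFIX = "timestamp":\s*
# Counted from the END of the TIME_PREFIX match, not from the start of
# the event, so it usually does not need to grow with event size.
MAX_TIMESTAMP_LOOKAHEAD = 40
# Default is 10000 bytes per line; a line cut off before the trailing
# timestamp can never match TIME_PREFIX, so raise this for large events.
TRUNCATE = 100000
```

Since MAX_TIMESTAMP_LOOKAHEAD is applied from where TIME_PREFIX matches, the more likely culprit for very large events is TRUNCATE (or LINE_BREAKER limits) chopping the line before the timestamp ever appears.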
My Splunk environment is: 1 Search Head, 1 Deployment Server (Master Node), 2 Indexers (clustered). I tried to implement a retention policy to delete data older than 365 days on the indexers, so I implemented the sample configuration for one of the indexes (_internal). First I changed indexes.conf under D:\Splunk\etc\master-apps\_cluster\local on the Master Node:

[_internal]
coldToFrozenDir = $SPLUNK_DB/_internal/frozendb
coldPath = $SPLUNK_DB/_internal/colddb
homePath = $SPLUNK_DB/_internal/db
thawedPath = $SPLUNK_DB/_internal/thaweddb
# frozen time is 365 days
frozenTimePeriodInSecs = 31536000
maxHotIdleSecs = 3600
repFactor = auto

Then I made the same change to indexes.conf under D:\Splunk\etc\slave-apps\_cluster\local on the indexer. To apply the changes, I used the master node GUI under Settings > Indexer Clustering > push button. More than 3 hours after doing this it still shows "Bundle reload is in progress. Waiting for all peers to return the status." How long should this take? Please, can anyone help on priority?
Hello everyone, I am trying to configure a connection for MSSQL version 2008; however, DB Connect reports "Connection is invalid / (No detail)". I have already tested the account and it works in SQL Express, and from the Splunk server I have verified via telnet that the port is open. Splunk version: 7.3.3. Splunk DB Connect: 3.1.4 (I have also tried 3.3.1). Driver: sqljdbc_4.2 and sqljdbc_7.2. Thanks for your help.
Hi, I have one forwarder and one combined instance (IDX+SH). I am ingesting a few hundred .txt files from a heavy forwarder into one specific index. Now I want to know what log file size was actually indexed per day versus what license volume was consumed per day for that particular index. Can anyone help me with an SPL query? And is it possible to get the actual .txt file size from Splunk? Thanks.
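A sketch of the usual way to read daily license consumption per index from the internal license logs, assuming _internal is searchable from the search head and using myindex as a placeholder index name:

```
index=_internal source=*license_usage.log* type=Usage idx=myindex
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024, 3)
```

License usage is measured on the raw, uncompressed event data, so for plain .txt files it is generally close to the original file sizes; Splunk does not retain the on-disk size of the source files themselves, so an exact per-file size would have to come from the filesystem.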
When installing the universal forwarder into a trusted domain, do I need to add an account from domain A into domain B? The instructions for the universal forwarder say a domain user account is needed, so I am trying to figure out which domain account to use.
Hi community! I'm using Splunk Enterprise to create dashboards with my client's ServiceNow incident information. My company only looks at tickets from assignment_group A. So, I have a ticket X that belongs to assignment_group A with Status "New". However, this ticket changed to assignment_group B and is no longer serviced by my company. As a result, in the next ServiceNow extraction that ticket will not appear. So, I need to create logic so that when this happens, Splunk changes the Status of ticket X to "Reassigned". Does anyone know how to do this? Thanks!
Users are unable to access data from a dashboard. We are using a data model to build that dashboard. We have enabled read access to the dashboard and the data model, but not to the raw data index. Please help me provide data access to the dashboard for users without giving them access to that index (raw data). Thanks in advance!
Hi, I'm able to get the count for NumberFormatException, but NumberFormatException is displayed in a separate column. Please see the screenshot; you will be able to understand. I need to move NumberFormatException and its count under "Technical error". Could you please help me with this?
I have a Splunk table with three columns with headings. I want to know how to add a drilldown / make only one column (the first column) clickable, and pass that value to all the other panels. Any help would be appreciated.

A B C
1 2 3
4 5 6

Make column A clickable only: the user should be able to click only on 1 or 4 at a time, and that value shall be passed on to the dashboard.
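A sketch of one common approach in Simple XML, assuming the first column is named A and using a hypothetical token name sel_a: a drilldown condition that matches only clicks on column A sets the token, and an empty catch-all condition swallows clicks on every other column.

```xml
<table>
  <search>
    <query>... | table A B C</query>
  </search>
  <drilldown>
    <!-- Only clicks on column A set the token -->
    <condition field="A">
      <set token="sel_a">$click.value$</set>
    </condition>
    <!-- Empty body: clicks on any other column do nothing -->
    <condition field="*"></condition>
  </drilldown>
</table>
```

The other panels can then reference $sel_a$ in their searches, and optionally use depends="$sel_a$" so they only render once a value has been clicked.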
Hi team, what is the difference between Perfmon data and the data provided by the Windows add-on?
Hello, I see this app only supports 6.6. Is there a chance this could be updated to support 8.x?
Hi All, I'm facing an issue while calculating a summation in timechart with a span of 5 minutes in a single value visualization. I want to display the sum of the data that came in during the last 5 minutes at the end of the 5-minute window instead of at the start. For example:

07/05/2020 07:05 34
07/05/2020 07:06 38
07/05/2020 07:08 10
07/05/2020 07:09 85
07/05/2020 07:10 43
07/05/2020 07:11 12

Here, I want the sum from 7:05 till 7:10 to be displayed at 7:10 instead of 7:05, i.e. as 176 at 7:10 instead of 167 at 7:05. Currently, I'm using the following query:

index=.... earliest=-24h
| timechart sum(count) as Volume span=5m
| fillnull value=0

Thanks
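One sketch of this, assuming the goal is simply to relabel each 5-minute bucket with its end time rather than its start (timechart always stamps a bucket with its start time):

```
index=.... earliest=-24h
| timechart sum(count) as Volume span=5m
| fillnull value=0
| eval _time=_time+300
```

Adding 300 seconds after the timechart shifts each bucket's label from the start of its 5-minute window to the end, without changing which events fall into which bucket.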
Hi, I have a query which gives me in_usage and out_usage for a device per metric:

bla bla ... | table Device metric_name "in_usage%" "out_usage%"

Now I want to display the top 20 in_usage values as one line in a line chart, and out_usage as another line in the same chart, by Device and metric_name. I'm a little confused about how to do that. I tried using the below at the end, but I don't want to explicitly sort by OUT_usage only; it should be sorted by both in and out:

| stats values("in_usage%") as IN_usage values("out_usage%") as OUT_usage by Device metric_name
| sort - OUT_usage
| head 20

Please help.
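A sketch of one way to rank by the larger of the two values while keeping both series on the same chart (sortfield and series are hypothetical helper fields, and this assumes IN_usage/OUT_usage come back single-valued):

```
... | stats values("in_usage%") as IN_usage values("out_usage%") as OUT_usage by Device metric_name
| eval sortfield=max(IN_usage, OUT_usage)
| sort - sortfield
| head 20
| eval series=Device.":".metric_name
| table series IN_usage OUT_usage
```

With series as the first column, a line chart renders IN_usage and OUT_usage as two lines over the same 20 x-axis categories; swapping max() for IN_usage+OUT_usage would rank by combined usage instead.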
Hi team, I have installed the UF and the add-on for Windows and am getting server data into my Splunk instance. Are there any use cases for monitoring and forecast prediction on this data using the MLTK? This is the server data, and the data is generated by the Windows add-on.