All Posts


We set chart colours to match our company standards, but this appears to have broken since Friday; we suspect browser or HTML updates rather than Splunk itself. Example code we use:

/* CHART COLOURS FOR LEGEND */
.highcharts-legend .highcharts-series-0 .highcharts-point { fill: #28a197; }
.highcharts-legend .highcharts-series-1 .highcharts-point { fill: #f47738; }
.highcharts-legend .highcharts-series-2 .highcharts-point { fill: #6f72af; }

/* BAR CHART FILL AREA */
.highcharts-series-0 .highcharts-tracker-area { fill: #28a197; stroke: #28a197; }
.highcharts-series-1 .highcharts-tracker-area { fill: #f47738; stroke: #f47738; }
.highcharts-series-2 .highcharts-tracker-area { fill: #6f72af; stroke: #6f72af; }

/* PIE CHART COLOURS */
.highcharts-color-0 { fill: #28a197; }
.highcharts-color-1 { fill: #f47738; }
.highcharts-color-2 { fill: #6f72af; }

Bar charts broke first. We found that replacing .highcharts-tracker-area with .highcharts-point fixed the bars, but it then left the pie charts with only one colour.
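One workaround we are considering (a sketch, assuming the bundled Highcharts adds its usual per-chart-type classes such as .highcharts-column-series and .highcharts-pie-series, which we have not verified in this Splunk version) is to scope the bar rules so they stop matching pie slices:

/* BAR/COLUMN CHARTS ONLY (class names are assumptions; check them in the browser inspector) */
.highcharts-column-series.highcharts-series-0 .highcharts-point { fill: #28a197; stroke: #28a197; }
.highcharts-column-series.highcharts-series-1 .highcharts-point { fill: #f47738; stroke: #f47738; }
.highcharts-column-series.highcharts-series-2 .highcharts-point { fill: #6f72af; stroke: #6f72af; }

/* PIE CHARTS ONLY */
.highcharts-pie-series .highcharts-point.highcharts-color-0 { fill: #28a197; }
.highcharts-pie-series .highcharts-point.highcharts-color-1 { fill: #f47738; }
.highcharts-pie-series .highcharts-point.highcharts-color-2 { fill: #6f72af; }

Since all pie slices sit in series-0, an unscoped .highcharts-series-0 .highcharts-point rule would paint every slice the same colour, which matches what we see.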
Whether the search takes long depends on your data. If these are fairly long and unique terms, they can be searched (relatively) quickly, provided you're looking strictly for those terms and not for wildcarded variations (especially variations where the wildcard is not at the end of the search term).
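For example (a sketch; the index name is a placeholder and your events may differ):

index=your_index "11111111" OR "22222222" OR "33333333"    <- exact terms, resolved quickly against the index
index=your_index *1111* OR *2222*                          <- wildcards, especially leading ones, force much slower matching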
It's not about a field but more about the general layout and variability of the data in your DB. Splunk works differently: once you ingest an event, it's immutable, whereas the contents of a particular row in a DB can change. So regardless of how you decide that one row of your results has already been ingested, it won't be ingested again even if some "secondary" fields change their values. I don't know your data or what it represents. If you reconfigure your DB data onboarding process to ingest both states of your DB record (or whatever result set you're getting), you'll have two separate, partly duplicated events in Splunk and will have to handle that somehow at search time.
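If you do end up ingesting both states, a common way to handle it at search time (a sketch; the index, sourcetype and field names here are assumptions based on this thread) is to keep only the latest event per record:

index=your_db_index sourcetype=your_db_sourcetype
| stats latest(_time) AS _time latest(STATUS) AS STATUS BY TASKID

Alternatively, | dedup TASKID keeps only the most recent event per TASKID when results are in the default newest-first order.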
Okay. Could you check whether you use the Distributed Monitoring Console and whether the affected HFs are configured as Indexer under Settings --> Monitoring Console --> Settings --> General Setup? That could be the reason the heavy forwarders are configured as distributed search peers: to monitor them in the DMC. So if the license manager is on the same instance as the DMC, check the config files for the affected HFs there and remove them if appropriate.
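For reference, the distributed search peers that instance uses are typically listed in distsearch.conf (a sketch; the path and host names below are placeholders):

$SPLUNK_HOME/etc/system/local/distsearch.conf

[distributedSearch]
# remove the affected heavy forwarders from this list, then restart the instance
servers = https://idx01.example.com:8089,https://idx02.example.com:8089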
Could you please share the full steps and the path you updated to fix this issue?
I have heavy forwarders without master_uri and manager_uri. Luckily, they are working okay apart from the error. In etc/licenses there is only a download-trial folder, no forwarder.license.
Hi @Crotyo, you should put the csv file in a lookup (called e.g. "my_lookup.csv"), containing at least one field (e.g. "my_field"), and then run a search like the following:

index=*
    [ | inputlookup my_lookup.csv
      | rename my_field AS query
      | fields query ]
| ...

In this way you perform a full-text search over all the events. Ciao. Giuseppe
I have a csv file like this that contains more than 100 numbers:

11111111
22222222
33333333

I want to search for events that contain these numbers. I can use index=* "11111111" OR "22222222", but it takes way too long. Is there a faster way? These numbers don't have a separate field and I'm not searching in any particular field; I'm just looking for any event log that contains these numbers. Can anyone help? Thanks.
Okay, just to confirm: master_uri and manager_uri are not set on the HF, right? Could you check which files are located under etc/licenses?
Hi Uma, you just have to create a metric per token and use a query like this:

SELECT toInt(expirationDateTime - eventTimestamp) AS "Seconds"

This gives you the difference in seconds between the dates; you can then convert the seconds to minutes/hours/days if you would rather use those. The metric tells you how many seconds/minutes/hours/days remain until expiry, and you can then alert on it. Ciao
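As a variation (a sketch; it assumes the difference really comes out in seconds, as above, and that the same arithmetic and aliasing are available in your query language), you can divide to get larger units:

SELECT toInt((expirationDateTime - eventTimestamp) / 60) AS "Minutes"
SELECT toInt((expirationDateTime - eventTimestamp) / 3600) AS "Hours"
SELECT toInt((expirationDateTime - eventTimestamp) / 86400) AS "Days"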
Hello @Mandar.Kadam, can you share the solution you got from support? Regards, Amit Singh Bisht
That's the answer, thanks!
To accept the license during startup, execute:

/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes

Before you start the forwarder service, I suggest creating a user-seed.conf to set the admin password (in clear text). user-seed.conf must be stored in /opt/splunkforwarder/etc/system/local/:

[user_info]
USERNAME = admin
PASSWORD = YourPassword

Another method is to hash the password and add the hash to the user-seed.conf. It is described in the following doc: Create secure administrator credentials - Splunk Documentation
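Putting both steps together for an unattended first start (a sketch; verify the flags against your forwarder version and replace the password):

# create the seed file before the first start
cat > /opt/splunkforwarder/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = Ch@ng3d!
EOF

# first start: accept the license and suppress all prompts
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt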
Thanks for the response Paul. I removed the master_uri; I can understand why, as it is now manager_uri. See below:

/opt/splunk/etc/system/local/server.conf    [license]
/opt/splunk/etc/system/local/server.conf    active_group = Forwarder
/opt/splunk/etc/system/default/server.conf  connection_timeout = 30
/opt/splunk/etc/system/default/server.conf  manager_uri = self
/opt/splunk/etc/system/default/server.conf  receive_timeout = 30
/opt/splunk/etc/system/default/server.conf  report_interval = 1m
/opt/splunk/etc/system/default/server.conf  send_timeout = 30
/opt/splunk/etc/system/default/server.conf  squash_threshold = 2000
/opt/splunk/etc/system/default/server.conf  strict_pool_quota = true

I did something else as well: I removed the heavy forwarders from the distributed search peers. Why they were there I don't know. That resolved one thing, the warning about disabling the peer.

The only thing remaining is the duplicate license hash (ffffff...) in the _internal index. I can understand the hash itself; every forwarder with this license has this hash. What I don't understand is why this warning appears, and only for the heavy forwarders that were in the distributed search peers, not for the ones that were not in that list. It seems something remained somewhere and keeps looking at the license on these heavy forwarders, reporting that it is the same license.

Any idea?
No matter how many times you ask in different posts, there currently isn't any easy way to do what you are asking for.
Hi @PickleRick, if I replace the TASKID column with the UPDATED column as the rising column, will it make a difference? FYI: I also increased the checkpoint value from 1 to 2, and even after the STATUS changed from RELEASED to FINISHED a second time, that row was not ingested into Splunk.
Hi Team,

We are planning to perform a silent installation of the Splunk Universal Forwarder on a Linux client machine. So far, we have created a splunk user on the client machine, downloaded the .tgz forwarder package, and extracted it to the /opt directory. The folder /opt/splunkforwarder now exists and its contents are accessible.

I have navigated to the /opt/splunkforwarder/bin directory, and I want to execute a single command to 1) agree to the license without prompts, and 2) set the admin username and password. I found a reference for a similar approach on Windows, where the following command is used:

msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet

However, I couldn't find a single equivalent command for Linux that accomplishes all these steps together. Could you please provide the exact command to achieve this on Linux?
Most probably your DB query initially returned one status which got ingested from the input but later something within your DB changed the status. But since the TASKID is the primary identifier for the ingested records, the same TASKID will not be ingested again. Hence the discrepancy between the DB contents and the indexed data.
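If you want status changes to be picked up again, one common approach (a sketch; the table name and the exact DB Connect settings are assumptions, the column names come from this thread) is to use an "updated" timestamp as the rising column instead of TASKID:

SELECT TASKID, STATUS, UPDATED
FROM tasks
WHERE UPDATED > ?
ORDER BY UPDATED ASC

DB Connect substitutes ? with the last checkpoint value, so every later update produces a new event; the resulting partial duplicates can then be collapsed at search time (e.g. stats latest(STATUS) BY TASKID).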
Ah, so your problem was actually _not_ the same as the original one. That's why there is rarely a point in digging up old threads.
Thanks, but this colors the background of the cell; I need to color the font only.