I have a CSV file like this that contains more than 100 numbers:

11111111
22222222
33333333

I want to search for events that contain these numbers. I can use index=* "11111111" OR "22222222", but it takes way too long. Is there a faster way? These numbers do not have a separate field, and I am not searching in any particular field; I am just searching for any event log that contains these numbers. Can anyone help? Thanks.
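A minimal sketch of a lookup-driven alternative, assuming the CSV is uploaded as a lookup named numbers.csv with a column called number (both names are placeholders). Renaming the column to query makes the subsearch expand into a plain OR of raw search terms, which avoids typing the 100+ values by hand:

index=* [ | inputlookup numbers.csv | rename number AS query | fields query ]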
Hi Team, we are planning to perform a silent installation of the Splunk Universal Forwarder on a Linux client machine. So far, we have created a splunk user on the client machine, downloaded the .tgz forwarder package, and extracted it to the /opt directory. The folder /opt/splunkforwarder now exists and its contents are accessible. I have navigated to the /opt/splunkforwarder/bin directory, and now I want to execute a single command to: agree to the license without prompts, and set the admin username and password. I found a reference for a similar approach on Windows, where the following command is used:

msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet

However, I couldn't find a single equivalent command for Linux that accomplishes all these steps together. Could you please provide the exact command to achieve this on Linux?
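A sketch of the usual non-interactive route, reusing the username and password from the question as placeholders: the admin credentials are seeded through user-seed.conf before the first start, and the start flags suppress the license and other prompts.

# /opt/splunkforwarder/etc/system/local/user-seed.conf (create before first start)
[user_info]
USERNAME = SplunkAdmin
PASSWORD = Ch@ng3d!

/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt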
Hello, I want to see the latest data time for all indexes, i.e. when the most recent data arrived in each index.
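A minimal sketch using tstats, which reads index metadata rather than raw events and is therefore fast:

| tstats max(_time) AS latest_time WHERE index=* BY index
| eval latest_time = strftime(latest_time, "%Y-%m-%d %H:%M:%S")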
Hello Splunkers!! I am facing an issue with data being ingested from the DB Connect plugin into Splunk. I have described the scenario below; I need your help fixing it. In DB Connect, the latest value I obtain has the STATUS value "FINISHED". However, when the events come into Splunk, I get the values with the STATUS value "RELEASED" and without the latest timestamp (UPDATED). What I am doing so far: I am using the rising column method to get the data into Splunk, to avoid duplicates in ingestion.
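For reference, a sketch of the rising-column query shape with UPDATED as the rising column (the table name is a placeholder; the ? is DB Connect's checkpoint marker). If the input instead uses an insert ID as the rising column, an already-ingested row that is later updated to FINISHED never re-qualifies against the checkpoint, which would match the symptom described:

SELECT * FROM my_table
WHERE UPDATED > ?
ORDER BY UPDATED ASC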
Each of the two lookups has URL information, and I queried them like this:

1) | set diff [| inputlookup test.csv] [| inputlookup test2.csv]

2) | inputlookup test.csv | join type=outer url [| inputlookup test2.csv | eval is_test2_log=1] | where isnull(is_test2_log)

The two results are different, and the actual correct answer is number 2. In case 1 there are 200 results; in case 2 there are 300 results. I don't know why the two results are different. Or even if they are different, shouldn't there be more results from number 1?
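One detail that may explain part of the gap: set diff compares entire result rows across every field, not just url, so its output is not directly comparable to a one-sided anti-join on url. For what it's worth, a join-free sketch of the anti-join in 2), assuming both lookups have a url column:

| inputlookup test.csv
| search NOT [ | inputlookup test2.csv | fields url ]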
When I search, I want to show the top results by a specific field "field1" and also show "field2" and "field3". The problem is that some results don't have a "field2" but do contain the other fields, so I get different results depending on whether I include "field2". Can I search and return all results whether or not "field2" exists?

| top field1 = all possible results
| top field1 field2 field3 = only results with all fields

What I want is just to show a blank where "field2" would be on matches that don't have a "field2"; basically, make "field2" optional.
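A minimal sketch: give events without field2 an empty placeholder value before top, so they are no longer dropped from the three-field combination:

| eval field2 = coalesce(field2, "")
| top field1 field2 field3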
Hi, has anyone come across adding "Oracle Autonomous DB" monitoring using a "Wallet" on AppDynamics? I need some help with the JDBC string when using a Wallet file. Regards, Vinodh
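Not an authoritative answer, but a sketch of the URL shape recent Oracle JDBC drivers accept for wallet-based connections, with the TNS alias and wallet path as placeholders:

jdbc:oracle:thin:@my_adb_high?TNS_ADMIN=/path/to/wallet_dir

The wallet directory is the unzipped wallet containing tnsnames.ora, sqlnet.ora, and the keystore files, and my_adb_high must match an alias defined in tnsnames.ora.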
With respect to the Magic 8, should you always try to include them in the props of your various sourcetypes for a data set? I am slightly confused: if this is a best practice, why do most pre-configured TAs on Splunkbase include only the magic 3 or 4? What happened to the rest of them? Is it always a best practice to include all 8?
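For reference, a props.conf sketch of the eight settings usually meant by the term; the sourcetype name, regexes, and time format are placeholders for an imagined line-oriented log:

[my:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)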
Hello, I am currently building correlation searches in ES and I am running into a "searches delayed" issue. Some of my searches run every hour, most every 2 hours, and some every 3 or 12 hours. My time range looks like: Earliest Time: -2h, Latest Time: now, cron schedule: 1 */2 * * *. For each new search I add +1 to the minute field of the cron schedule, up to 59, and then start over; so the next search's schedule would be 2 */2 * * *, and so on. Is there a more efficient way I should be scheduling searches? Thank you.
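One option is to let the scheduler do the staggering: a schedule window tells Splunk a search may start anywhere inside the window instead of exactly on the cron minute, which smooths out contention without hand-numbering every search. A savedsearches.conf sketch (correlation searches are saved searches underneath; the name and window are placeholders, and schedule_window = auto is also accepted):

[My Correlation Search]
cron_schedule = 0 */2 * * *
schedule_window = 30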
Hi Team, two episodes are being generated for the same host and the same description, with different severities: one is high and the other is info. When I checked the correlation search, we have given only high, and I checked the NEAP policy as well. Under the episode information, the severity shown is the same one we had given for the first event. Can someone please guide me on how to avoid the info episode, and where to find the configuration responsible for the info severity? Regards, Nagalakshmi
I downloaded the tutorial data and want to upload it, but I keep getting an error message. Also, my system health indicator is showing red, and when I click it, it shows too many issues that have to be resolved. Where do I begin resolving my issue? Thanks
Hello, I have a server configured with three roles: Deployment Server, Monitoring Console, and License Master. However, I am not receiving the internal and audit logs from this server, as I do from the Search Heads or Indexers. If you have any solutions to this problem, I would greatly appreciate your help.
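A sketch of the usual cause, assuming the gap is simply that no forwarding is configured on that host: a full Splunk Enterprise instance only forwards its own _internal and _audit data if outputs.conf tells it to. The hostnames and port below are placeholders:

# $SPLUNK_HOME/etc/system/local/outputs.conf on the DS/MC/LM server
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997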
I need to color a dashboard with 3 types of logs in red, blue, and yellow, for ERROR, INFO, and WARNING respectively. So far I have managed to integrate the CSS code below into the panel (it colors all rows blue):

<html>
<style type="text/css">
.table-colored .table-row, td {
  color: blue;
}
</style>
</html>

Does anyone know how to add conditions based on cell values, e.g. if cell = INFO then blue, if cell = ERROR then red, if cell = WARNING then yellow?
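In case it helps, a Simple XML sketch that avoids custom CSS entirely: a color map on the table column that holds the level. The field name log_level is a placeholder, and this <format> element goes inside the panel's <table> element:

<format type="color" field="log_level">
  <colorPalette type="map">{"ERROR":#DC4E41,"WARNING":#F8BE34,"INFO":#5379AF}</colorPalette>
</format>

Note this colors the cells of that one column rather than the whole row; full-row coloring generally needs a JavaScript table row renderer.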
Hello Splunkers, I hope you are all doing well. I have tried to deploy the Windows TA add-on across my environment: [Search Head Cluster + Deployer] [3 Indexer Peers + Indexer Cluster Master] [Deployment Server + Universal Forwarder]. I used the deployment server to push inputs.conf to the designated universal forwarder, which sits on the domain controller server, and enabled the needed inputs. I then removed wmi.conf and inputs.conf from the Windows TA add-on, copied the rest to the local folder, and used the deployer to push the modified Windows TA to the search heads. As per the screenshot from the official doc, the indexer is listed as conditional. Why should I push the add-on to the indexers even if there are index-time field extractions? As far as I know, the search head cluster replicates the whole knowledge bundle to the indexers, so all the KOs will be replicated and there is no need to push them; am I correct? Splunk Add-on for Microsoft Windows. Thanks in advance!!
[UPDATE] Hello everyone, and thanks in advance for your help. I'm very new to this subject, so if anything is unclear, I'll try to explain my problem in more detail. I'm using Splunk 9.2.1, and I recently observed that my indexer was not indexing the logs it received. The indexer is in a failure state because my $SPLUNK_DB partition reached the minFreeSpace allowed in server.conf. After further analysis, it seems that one of the indexes on the partition, _metrics, is saturated with warm buckets (db_*) and taking all the available space. I have, however, configured all my indexes with this indexes.conf ($SPLUNK_HOME/etc/system/default/indexes.conf):

# index specific defaults
maxTotalDataSizeMB = 5000
maxDataSize = 1000
maxMemMB = 5
maxGlobalRawDataSizeMB = 0
maxGlobalDataSizeMB = 0
rotatePeriodInSecs = 30
maxHotIdleSecs = 432000
maxHotSpanSecs = 7776000
maxHotBuckets = auto
maxWarmDBCount = 300
frozenTimePeriodInSecs = 188697600
...
# there's more but I might not be able to disclose it or it might not be relevant

[_metrics]
coldPath = $SPLUNK_DB/_metrics/colddb
homePath = $SPLUNK_DB/_metrics/db
thawedPath = $SPLUNK_DB/_metrics/thaweddb
frozenTimePeriodInSecs = 1209600

From what I understand, with this conf applied the index should not exceed 5 GB, and when that limit is reached the warm/hot buckets should be removed, but it seems that this is not being taken into account in my case. The indexer works fine after purging the buckets and restarting it, but I don't get why the conf was not applied. Is there something I didn't get here? Is there a way to check the "characteristics" of my index once started? -> Checked, the conf is correctly applied. If you know anything on this subject, please help. Thank you.
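For the "check the characteristics" part, btool prints the merged configuration exactly as the indexer resolves it:

$SPLUNK_HOME/bin/splunk btool indexes list _metrics --debug

The --debug flag shows which file each setting came from, which helps spot a higher-precedence copy (etc/system/local or an app) overriding the values; note that etc/system/default is the lowest-precedence layer and is overwritten on upgrade, so custom defaults normally belong in etc/system/local.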
Hi all, at our company we have multiple heavy forwarders. Normally they talk to the central license manager, but for migration reasons we have to get them talking to themselves, so a forwarder license is in order, I think. It looks like an easy process. I did the following:

./splunk edit licenser-groups Forwarder -is_active 1

This sets the following in /opt/splunk/etc/system/local/server.conf:

[license]
active_group = Forwarder

and

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder

I had to set master_uri = self myself. After restarting the server it looks like a go. But when I do this on every heavy forwarder, it gives these errors in _internal:

Duplicate license hash: [FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD], also present on peer .

and

Duplicated license situation not fixed in time (72-hour grace period). Disabling peer..

Why? Is it really going to disable the forwarders when they use the forwarder license? What can I do about it, and where did I go wrong? Thanks in advance. Greetz, Jari
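For reference, the consolidated server.conf each heavy forwarder ends up with under the steps above (values taken from the question itself, not a verified recommendation):

[license]
active_group = Forwarder
master_uri = self

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder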
Hi All, we are on Splunk version 9.2, and we want a custom dashboard as the landing page: whenever any user logs in, the custom dashboard should appear, along with the apps column on the left side. How can we achieve this? Right now this works only for me, and even then it lands in the quick links tab. I can also set the dashboard as the default under the navigation menu, but then it does not show the apps column on the left side.
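A sketch of the per-app route, assuming a dashboard with id my_dashboard inside an app you control (both names are placeholders): marking the view as default in the app's navigation makes it that app's landing view, and user-prefs.conf can make the app itself the default for all users.

# etc/apps/your_app/default/data/ui/nav/default.xml
<nav search_view="search">
  <view name="my_dashboard" default="true" />
  <view name="search" />
</nav>

# etc/apps/user-prefs/local/user-prefs.conf
[general_default]
default_namespace = your_app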
We search through the logs of switches, and some logs are unconcerning if you just have a couple of them, say 5 in an hour. But if you have more than 50 in an hour, something is wrong and we want to raise an alarm for that. The problem is, I cannot simply search back over the last hour and show devices with more than 50 results, because I would not catch an issue that existed 5 hours ago. So I am looking at timecharts that compute a statistic for every hour, and then I want to filter out the slots with fewer than x events. What I came up with is this (searching the last 24h):

index=network mnemonic=MACFLAP_NOTIF | timechart span=1h usenull=f count by hostname where max in top5

But this does not work, as I still get all the timechart slots where I have 0 or fewer than 50 logs. So imagine the following data:

switch01 08:00-09:00 0 logs
switch01 09:00-10:00 8 logs
switch01 10:00-11:00 54 logs
switch01 11:00-12:00 61 logs
switch01 12:00-13:00 42 logs
switch02 08:00-09:00 6 logs
switch02 09:00-10:00 8 logs
switch02 10:00-11:00 33 logs
switch02 11:00-12:00 29 logs
switch02 12:00-13:00 65 logs

My ideal search would return the following lines:

Time         Hostname  Results
10:00-11:00  switch01  54
11:00-12:00  switch01  61
12:00-13:00  switch02  65

The exact time is not that important; I am looking more for the results based on the count. Any help is appreciated.
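A minimal sketch that keeps the hourly bucketing but filters on the per-host count, so only host/hour pairs above the threshold survive:

index=network mnemonic=MACFLAP_NOTIF
| bin _time span=1h
| stats count BY _time, hostname
| where count > 50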
Hi, I have a user, let's say USER1, whose account is getting locked every day. I searched his username in Splunk and events are coming from two indexes, _internal and _audit. How do I check the reason for his locked account?
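A starting-point sketch: the _audit index records login attempts, so listing USER1's failures can show what keeps tripping the lockout. The action and info values below match what audit.log typically emits, but verify them against your own events:

index=_audit user=USER1 action="login attempt" info=failed
| table _time, host, info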
Just curious to find out if anyone has ever integrated a Splunk Cluster with ITSI. It seems to me that SC certainly qualifies as a service with many dependencies, quite a few of which are not covered by the Monitoring Console. While comprehensive, the MC doesn't include many things that SC just assumes are working correctly, primary among these the networks on which everything relies. For instance, if you have a flapping port on a switch, or a configuration problem with a switch or router, this could potentially cause many interesting issues with SC about which the MC would have no clue. Has anyone ever made any moves along these lines? Charles