All Topics


Hi. I have a business requirement where I need to index data from several of our vendors, who also use Splunk. The vendors have added a _TCP_ROUTING setting to send data both to our Heavy Forwarders and to their own infrastructure.

I have a dedicated port for each vendor in inputs.conf on the Heavy Forwarder:

[splunktcp-ssl:9997]
disabled = 0
_meta = userindex::splunk_test

My idea was to have a different userindex for each input stanza. The next step is a generic props.conf:

[host::*]
TRANSFORMS-force_index = force_index

Finally, I was hoping it would be possible to do the magic in my transforms.conf:

[force_index]
DEST_KEY = MetaData:Sourcetype
REGEX = (.+)
FORMAT = $1
SOURCE_KEY = _meta:userindex
WRITE_META = true

I know I'm not rewriting the index yet, but it is easier to look at the sourcetype as the events get indexed, and it should be a small change to rewrite the index instead of the sourcetype. Long story... so to the question: is it possible to reference the _meta variable I have set in the input stanza from the transform on the same Heavy Forwarder?

Kind regards,
Lars

P.S. I agree it is a bad idea to rewrite the index (it should be set at the source), but I think it is necessary, as our indexes do not match those of our vendors and I want each vendor's data to land in the same index.
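A hedged sketch of what the eventual index rewrite might look like, assuming the _meta field set in inputs.conf is visible to index-time transforms via SOURCE_KEY = _meta (worth verifying on your version), and that the target index already exists on the indexers:

```
# transforms.conf (sketch, not verified)
[force_index]
SOURCE_KEY = _meta
REGEX = userindex::(\S+)
DEST_KEY = _MetaData:Index
FORMAT = $1
```

_MetaData:Index is the documented DEST_KEY for overriding the destination index at parse time.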
For the following search command, what is the expected output?

| makeresults
| eval text_string = "I:red_heart:Splunk"
| eval text_split = split(text_string, "")

I would expect a text_split field that either contains an array like this:

text_split == [ 'I', ' ', 'S', 'p', 'l', 'u', 'n', 'k' ]

or, if split by byte (potentially dependent on the locale):

text_split == [ 'I', 'â', '', '¤', 'ï', '¸', '¿', 'S', 'p', 'l', 'u', 'n', 'k' ]

but not the current output, where the data is:

text_split == [ 'I', '�', '�', '�', '�', '�', '�', 'S', 'p', 'l', 'u', 'n', 'k' ]

The use of characters that aren't fixed width also breaks search-entry highlighting and text selection, but that isn't related to the split function.

| eval text_string = "I:red_heart:Splunk" `comment("Try highlighting a word in this comment in the SPL Editor")`

It looks like mvjoin() reverses the split(), but mvcombine fails. (An edit attempt failed to add the red heart back to the code samples; it has been replaced with :red_heart:)
I am trying to compare the current date with the lastInformTime. I have tried | eval, but nothing seems to work.

index="device_list" pppUsername=* provRecordStatus=Succeeded
| eval timenow=now()
| spath lastInformTime
| search lastInformTime>=timenow
| dedup macAddress, serialNumber
| table ipAddress, serialNumber, lastInformTime, pppUsername, macAddress

The _time that is assigned during the import does not match any date in the export, and I am not sure where Splunk is getting it from. Is there a way to set _time to lastInformTime? TIA
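A hedged sketch: now() returns epoch time, so lastInformTime has to be converted to epoch before the two can be compared. The strptime format string below is an assumption; adjust it to match the actual format of lastInformTime in your events:

```
index="device_list" pppUsername=* provRecordStatus=Succeeded
| spath lastInformTime
| eval inform_epoch = strptime(lastInformTime, "%Y-%m-%dT%H:%M:%S")
| where inform_epoch >= now()
| dedup macAddress, serialNumber
| table ipAddress, serialNumber, lastInformTime, pppUsername, macAddress
```

To make _time itself equal lastInformTime at index time, a TIME_PREFIX/TIME_FORMAT pair in props.conf for the sourcetype would normally be the place to do it.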
The Splunk Docs have this example under timechart:

Example 3: Show the source series count of INFO events, but only where the total number of events is larger than 100. All other series values will be labeled as "other".

index=_internal | timechart span=1h sum(eval(if(log_level=="INFO",1,0))) by source WHERE sum > 100

In my own search, I'm trying to show just "where max in top5" (or I could alternatively use "where max > 20000"), but either way the results always contain the "OTHER" series for everything outside the top 5 series. So you might get:

---Series 1
---Series 2
---Series 3
---OTHER

I'd like to exclude OTHER. I've tried limit=0 and limit=5, but I believe the limit option is ignored when a where clause is used. Does anyone have any ideas how I could work around this?
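A hedged sketch: timechart has a useother option that controls whether the residual OTHER series is produced at all, and it can be combined with the WHERE clause (behavior alongside WHERE is worth verifying on your version):

```
index=_internal
| timechart span=1h useother=f sum(eval(if(log_level=="INFO",1,0))) by source WHERE max in top5
```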
Hi all,

I've configured a universal forwarder on a Windows server to monitor a folder of CSV files. These files are logs from our mail relay system, so they are being written to regularly. I can see the files in my Splunk search head, but only the column headers, not the data itself. I've configured the sourcetype as csv and added crcSalt = <SOURCE> to the inputs configuration on the Windows server.

Does anyone have any idea why I'm only getting the headers?

Thanks all
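A hedged sketch of a structured-input configuration that is sometimes the missing piece here: with INDEXED_EXTRACTIONS, the props.conf stanza has to be deployed on the universal forwarder itself, since structured parsing happens there. The sourcetype name below is hypothetical:

```
# props.conf on the universal forwarder (sketch)
[mail_relay_csv]
INDEXED_EXTRACTIONS = csv
```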
The Web datamodel contains negative values for bytes ingested from Umbrella proxy logs. Below is the query that we are using for the search:

| tstats `summariesonly` sum(Web.bytes_out) as size_out, sum(Web.bytes_in) as size_in, values(Web.http_method) as method from datamodel=Web.Web by Web.user, Web.url, _time span=1d
| `drop_dm_object_name("Web")`
Hi,

We are using the Splunk App for Infrastructure and are collecting events on the server side using collectd. We want to get the used and free space, etc., for a specific mount point, and we are using the df plugin for that. But we are not getting the same data for that specific mount point (/abc/def) that we see on the server itself. Below is the file content:

<Plugin df>
  Device "/abc/def"
  MountPoint "/"
  ReportByDevice false
  ValuesAbsolute false
  ValuesPercentage true
  IgnoreSelected true
</Plugin>

Can anyone please help me with that?
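A hedged sketch of a selection that targets just one mount point: in collectd's df plugin, IgnoreSelected true means "collect everything except the listed entries", so a configuration that reports only /abc/def would look roughly like this (assuming /abc/def is the mount point rather than the device):

```
<Plugin df>
  MountPoint "/abc/def"
  IgnoreSelected false
  ValuesAbsolute true
  ValuesPercentage true
</Plugin>
```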
Hi all, I'm having a problem getting Splunk to connect to MongoDB. Here is what I have done so far:

1. Downloaded the driver from https://raw.githubusercontent.com/michaelloliveira/traccar-mongodb/master/lib/mongodb_unityjdbc_full... and copied it to the drivers folder of the DB Connect app.

2. Copied db_connection_types.conf from default to the local folder and added the following stanzas:

[mongo]
displayName = MongoDB
jdbcDriverClass = mongodb.jdbc.MongoDriver
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcUrlFormat = jdbc:mongo://<host>:<port>/<database>
#jdbcUrlSSLFormat = jdbc:mongo://<host>:<port>/<database>
useConnectionPool = false
port = 27017
testQuery = SELECT 1

[mongo2]
displayName = MongoDB2
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcUrlFormat = jdbc:mongodb://<host>:<port>/<database>
#jdbcUrlSSLFormat = jdbc:mongodb://<host>:<port>/<database>
jdbcDriverClass = mongodb.jdbc.MongoDriver
port = 27017

3. Created an identity for the connection. This account has readWrite permission on the database and has been tested working.

4. Restarted Splunk. I can see that the driver is detected for both MongoDB connection types.

5. Created a connection using either MongoDB or MongoDB2. Both failed:

Error for MongoDB: not authorized for query on Test._schema
Error for MongoDB2: No suitable driver found for jdbc:mongodb://si0vmc3134.de.bosch.com:30000/Test

Does anyone know what's wrong here? Thanks
I'm new to Splunk Enterprise. I want to trigger a real-time alert if there are more than 3 failed login attempts within any 5-minute window; for example, if there are more than 3 failed login attempts between 01:00 PM and 01:05 PM, the alert should fire. My questions:

A. What is the best type of alert for this case: a real-time rolling window, or real-time per result?

B. What is the best practice for specifying the trigger condition for this case (according to question A)? Should it be in the search command, or in the alert form's custom trigger condition?
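A hedged sketch of the scheduled-search alternative, which many admins prefer over real-time alerts for this pattern (real-time searches hold resources on the indexers). The index and search terms below are placeholders for whatever identifies a failed login in your data; schedule the search every 5 minutes over the last 5 minutes and trigger when the number of results is greater than 0:

```
index=security action=failure earliest=-5m@m latest=@m
| stats count by user
| where count > 3
```

If a real-time alert is required, a rolling 5-minute window with a custom trigger condition on the result count maps most directly onto the requirement.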
Hi, I am new to DB Connect 3.3.1. I have it working for Sybase, but I can't get it going for Oracle. I am trying to connect to Oracle DB 18c. I have installed the drivers from Oracle, but with the configuration below I am getting the following error:

"There was an error processing your request. It has been logged (ID cda8c0139c1f536e)."

I have looked in the logs and I can see the following; I'm not sure if it is related:

07-02-2020 11:20:40.681 +0200 WARN AuthorizationManager - Unknown role 'dbx_user'

[OCBC_PRTC]
connection_type = oracle
customizedJdbcUrl = jdbc:oracle:thin:@//dell1048srv@murex.com:1521/OCBC_PERFORMANCE_31_MX
database = OCBC_PERFORMANCE_31_MX
disabled = 0
host = DELL1048SRV
identity = OCBC_PERFORMANCE_31_MX
jdbcUseSSL = false
localTimezoneConversionEnabled = false
port = 1521
readonly = true
timezone = Europe/Paris

Any help would be great. Thanks
I have a requirement to stop alerts from triggering during a maintenance window, from July 2, 2020 8 PM to July 6, 2020 6 AM.
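A hedged sketch of one common approach: append a clause to each alert's search so it returns no results while inside the window (the timestamp format and timezone handling are assumptions; strptime uses the search head's local timezone):

```
| where NOT (now() >= strptime("2020-07-02 20:00", "%Y-%m-%d %H:%M")
         AND now() <= strptime("2020-07-06 06:00", "%Y-%m-%d %H:%M"))
```

Alternatively, simply disabling the alerts' schedules for the duration of the window avoids touching the searches at all.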
Hi, I am working on a project where we will be monitoring the Windows backup logs from all our servers. The idea is to create a Splunk alert whenever there is a backup process that did not start, started but has not finished, or started but failed. If the alert is triggered, an email will be sent to the admin with the list of servers that met the condition. So far, I have sourced the event IDs from the Windows backup logs that I need for the search:

EventCode=1 - Windows backup started
EventCode=4 - Backup successful

This can easily be done by creating an alert that searches for the event codes from a single server and triggers if there are no results. My problem is that we have at least 12 servers. Does this mean I have to create an alert item for each server, or is there an easier way to do this with just one alert? Or is there an app/add-on that easily does this? Thanks in advance for any suggestions.
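A hedged sketch of a single alert covering all servers (the index and source are placeholders for wherever the backup events land): count the start and success events per host over the backup window and keep the hosts where either is missing:

```
index=wineventlog source="*Backup*" (EventCode=1 OR EventCode=4) earliest=-24h
| stats count(eval(EventCode=1)) as started, count(eval(EventCode=4)) as succeeded by host
| where started=0 OR succeeded=0
```

Note that a host that sends no backup events at all will not appear in the results; catching those requires comparing against a lookup of expected servers.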
Hi, can you help me to solve this problem, please? I have index=index1. In a specified time range, e.g. 3 hours, I have the events below. Time is a regular time point at which the electric power was measured, ID is the name of the electrical counter that records the measurements, and Value is the measured electrical power [kW].

Time                 ID        Value
02.07.2020 06:00:00  counter1  1000
02.07.2020 06:00:00  counter2  2000
02.07.2020 06:00:00  counter3  3000
02.07.2020 07:00:00  counter1  2000
02.07.2020 07:00:00  counter2  3000
02.07.2020 07:00:00  counter3  4000
02.07.2020 08:00:00  counter1  3000
02.07.2020 08:00:00  counter2  4000
02.07.2020 08:00:00  counter3  5000

How can I compute the consumption of each counter in this time range? I need this output:

ID        consumption
counter1  2000
counter2  2000
counter3  2000

Thank you
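A hedged sketch: since each counter's Value increases monotonically, the consumption over the time range is the difference between its latest and earliest reading, which the range() aggregation computes directly:

```
index=index1
| stats range(Value) as consumption by ID
```

range() returns max minus min per group, which equals last minus first for a monotonically increasing counter; if a counter can reset within the window, this simple form would understate the consumption.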
Hi All,

As indicated here (https://community.splunk.com/t5/Getting-Data-In/Why-am-I-unable-to-monitor-SPLUNK-HOME-var-log-splunk-audit-log/m-p/506185#M86203), I have been able to get audit.log from our Universal Forwarders with the audittrail sourcetype. Unfortunately, the events read from $SPLUNK_HOME/var/log/splunk/audit.log are sometimes merged into a single event (even though each event is on a new line and starts with a timestamp). In our deployment, Universal Forwarders send data to Heavy Forwarders, which then send it to indexers: UF --> HF --> IDX.

What I tried to do is deploy a props.conf on the HF with the following:

[audittrail]
SHOULD_LINEMERGE = false
SEDCMD = s/\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2}\.\d{3}.* INFO AuditLogger - //g

But even the SEDCMD is not applied, and I can verify with the following command that the configuration is properly read on the HF:

splunk btool props list --debug

Given that, I tried adding this props.conf directly on the UF, and it works (but it is not a good solution for us, because we don't want to force local processing on the UF):

[audittrail]
SHOULD_LINEMERGE = false
SEDCMD = s/\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2}\.\d{3}.* INFO AuditLogger - //g
force_local_processing = true

I believe the issue is related to the fact that the audit logs from the UF are sent to the HF's indexQueue instead of the parsingQueue. I also tried adding the following audit.conf on both the UF and the HF, without any luck:

[default]
queueing = false

Reading further in the Splunk documentation (https://docs.splunk.com/Documentation/Splunk/8.0.4/Admin/Auditconf):

queueing = <boolean>
* Whether or not audit events are sent to the indexQueue.
* If set to "true", audit events are sent to the indexQueue.
* If set to "false", you must add an inputs.conf stanza to tail the audit log for the events reach your index.
* Default: true

My questions are:

1. Do you know what the docs mean in practice by "If set to 'false', you must add an inputs.conf stanza to tail the audit log for the events reach your index"?
2. Do you have any idea how to apply the props.conf on the HF to the audit events coming from the UF, without having to deploy it directly on the UF with force_local_processing = true?

Thanks a lot,
Edoardo
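A hedged reading of question 1, with a sketch: with queueing = false, audit events are no longer injected internally, so Splunk expects the file to be picked up like any other log via a monitor stanza; something like the following on the forwarder (the index name follows the convention that audit events live in _audit):

```
# inputs.conf (sketch, not verified)
[monitor://$SPLUNK_HOME/var/log/splunk/audit.log]
sourcetype = audittrail
index = _audit
```

Events ingested through a monitor input go through the regular parsing pipeline, so a props.conf on the HF should then get a chance to apply.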
This is the search I have tried so far, but the join part is not working for me and I don't know why:

((index="ata" sourcetype="s:sv" y_id>=4
  te>=[| makeresults
       | eval start_date=strftime(relative_time(now(), "-30d@d"), "%Y-%m-%dT%H:%M:%SZ")
       | fields start_date
       | return $start_date]
  earliest=-90d@d
  [| join type="inner" id
     [search index="ys_kb" sourcetype="lys:b_l" y_id>=4 ble=1
      | dedup id
      | fields id
      | return id]])
 OR (index="s_ata" sourcetype="lys:h_xl" os=* earliest=-90d@d))
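A hedged sketch of what may have been intended: join cannot appear as the first command inside a subsearch used as a search-time filter, but a plain subsearch that returns id values has the same effect as the inner join here (index, sourcetype, and field names are taken from the original as-is):

```
((index="ata" sourcetype="s:sv" y_id>=4 earliest=-90d@d
  te>=[| makeresults | eval start_date=strftime(relative_time(now(), "-30d@d"), "%Y-%m-%dT%H:%M:%SZ") | return $start_date]
  [search index="ys_kb" sourcetype="lys:b_l" y_id>=4 ble=1 | dedup id | fields id])
 OR (index="s_ata" sourcetype="lys:h_xl" os=* earliest=-90d@d))
```

The subsearch returns its id values as an implicit (id=... OR id=...) filter on the outer search, which is what an inner join on id would have achieved.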
Is there a way to calculate what the index size would be if the buckets weren't replicated, i.e. how much disk space I would need per indexer if the replication factor equaled the number of indexers? The values displayed in the Monitoring Console (e.g. Index Detail: Deployment) seem to show the sum of the index size across all indexers. https://community.splunk.com/t5/Archive/How-to-calculate-the-index-size-from-all-indexers/m-p/96940 also goes in that direction, but there, too, I can only see the summed size or the size per indexer.
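A hedged sketch of one way to approximate this: dbinspect reports each bucket copy along with its bucketId, so deduplicating on bucketId before summing should count every bucket only once, regardless of how many replicas exist (worth sanity-checking against a known index, since searchable and non-searchable copies can differ in size):

```
| dbinspect index=*
| dedup bucketId
| stats sum(sizeOnDiskMB) as unreplicated_size_mb by index
```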
Hi Team,

I want to build an expand/collapse menu using Simple XML. I want to create a panel heading with a "+" sign; when I click the "+" sign, it should display the chart panel and the "+" should change to "-". When I click "-", the panel should collapse. I want to achieve this using CSS in Simple XML. Please suggest. Thanks to anyone who can help.
Hi, I have a dataset with a column named WiFi_txop0 and values such as 48, 54, 76, 78, 87, 77, 254311, 65, 99, 65, ... I want to replace the value 254311 with 0 so that I can get a sensible average. I am using the following query:

index=mmm
| stats avg(aWiFi_txop0) as WiFi_txop0
| eval WiFi_txop0_new = if(WiFi_txop0 > 100, 0, WiFi_txop0)
| eval usage_percent = round(WiFi_txop0_new, 0)
| fields + usage_percent

But I am not getting 0 as the result. Please help.

Thanks
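A hedged sketch of a likely fix: the query above averages first and only then applies the if(), so the outlier has already skewed the result (it also averages aWiFi_txop0, while the column is named WiFi_txop0, which may be a typo). Replacing the outlier per event before averaging may be what's wanted:

```
index=mmm
| eval WiFi_txop0_clean = if(WiFi_txop0 > 100, 0, WiFi_txop0)
| stats avg(WiFi_txop0_clean) as avg_txop0
| eval usage_percent = round(avg_txop0, 0)
| fields + usage_percent
```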
Hi, I'm trying to use Splunk to provide a report on servers where a service is absent. I have one event per service per host, so if there are 10 services running on 1 host, that is 10 different events. My idea was to do a search that combines all of the services on a host into a single field and then search for hosts where that field doesn't contain the value I am looking for, but I have no idea how to achieve this. Here are a couple of sample raw events from the same host:

20200702162757.583428
Caption=Remote Desktop Configuration
Description=Remote Desktop Configuration service (RDCS) is responsible for all Remote Desktop Services and Remote Desktop related configuration and session maintenance activities that require SYSTEM context. These include per-session temporary folders, RD themes, and RD certificates.
Name=SessionEnv
PathName=C:\WINDOWS\System32\svchost.exe -k netsvcs
StartMode=Manual
StartName=localSystem
State=Running
Status=OK
wmi_type=Service

20200702162757.583428
Caption=Symantec Endpoint Protection WSC Service
Description=Allows Symantec Endpoint Protection to report status to the Windows Security Center.
Name=sepWscSvc
PathName="C:\Program Files (x86)\Symantec\Symantec Endpoint Protection\14.3.558.0000.105\Bin64\sepWscSvc64.exe"
StartMode=Auto
StartName=LocalSystem
State=Running
Status=OK
wmi_type=Service

Assume I want to return hosts where the second service entry is absent.
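A hedged sketch of the combine-then-filter idea (the index is a placeholder; Name and wmi_type are taken from the sample events): gather each host's service names into a multivalue field, then keep the hosts where the wanted service is missing:

```
index=windows wmi_type=Service
| stats values(Name) as services by host
| where isnull(mvfind(services, "^sepWscSvc$"))
```

mvfind() returns the index of the first multivalue entry matching the regex, or null when there is no match, so isnull() selects the hosts where the service never appeared.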
Hello everyone. I have been trying to get CPU time for different workloads; however, for some workloads I am getting multiple entries of CPU time. How do I avoid getting multiple entries? Please see the query I am working on below:

| fields SMF30JBN DATETIME SMF30CPT
| eval Job_Name=SMF30JBN, Date=substr(DATETIME,1,10)
| eval WORKLOAD=substr(Job_Name,1,3)
| eval CP_Time=SMF30CPT
| eval cpu_time=strptime(SMF30CPT,"%H:%M:%S.%2N")
| eval base=strptime("00:00:00.00","%H:%M:%S.%2N")
| eval ctime=cpu_time-base
| eval ctime=round(ctime, 2)
| stats values(ctime) as CPU_TIME by WORKLOAD Date
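A hedged sketch: values() keeps every distinct ctime seen for a workload on a date, which is why multiple entries appear. If the rows are separate interval records that should be accumulated, replacing the final line with sum() collapses them to one number per group; if they are duplicates of the same measurement, max() or latest() may be more appropriate:

```
| stats sum(ctime) as CPU_TIME by WORKLOAD Date
```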