We are a startup company building apps similar to Netflix and YouTube. Along with each platform, we would like to create an analytics dashboard for each app, including reports like user visits, content views, watch time, etc. Is that possible with Splunk?
Hello. I have a large data set that I'm working through that gives either a 5-digit number or a "-" if there is no value. I have my search results, but I can't seem to get them into the format I'm looking for. I'd like to get the results into a format showing:

Room 1  Set (total)  Unset (total)

and the same for Rooms 2, 3, and 4.

Query:

index=acme dvc_room="*" station="*"

Output:

index=acme dvc_room=4 station="-"
index=acme dvc_room=3 station="123456"
index=bluecoat dvc_room=2 station="-"
index=bluecoat dvc_room=1 station="56132"
index=bluecoat dvc_room=3 station="-"
index=bluecoat dvc_room=2 station="56132"
index=bluecoat dvc_room=4 station="56132"

Any help would be appreciated.
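A minimal sketch of one way to produce Set/Unset totals per room (untested; it assumes station="-" means "unset", with field names taken from the question):

```
index=acme OR index=bluecoat dvc_room="*" station="*"
| eval state=if(station="-", "Unset", "Set")
| chart count over dvc_room by state
```

This yields one row per room with a Set column and an Unset column.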
The alert is not triggered via email. I have a search that returns a large event count (approximately more than 10k over 6 hours), which exceeds the default value settings responsible for the email trigger. I have also created a summary index and run the report once a week, but the email is still not triggered because of the huge event count. Below are the options I have found to modify the default values:

$SPLUNK_HOME/etc/system/local/limits.conf
[scheduler]
max_action_results = 175000
[searchresults]
maxresultrows = 175000

$SPLUNK_HOME/etc/system/local/alert_actions.conf
[default]
maxresults = 175000

This enables an email alert containing a .csv with up to 175k rows. Since it is not recommended to change these default values in our infrastructure, is there any other option for sending the report via email?
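Rather than raising the limits, another sketch is to shrink the result set below the defaults before the email action fires, for example by aggregating in the scheduled search itself (the index and field names here are placeholders, not from the question):

```
index=my_summary earliest=-7d@d latest=@d
| stats count as events by source, status
| sort - events
```

A report that emails this aggregated table stays far below maxresults, while the full detail remains searchable in the summary index.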
I have a Splunk dashboard where I want an input box for the user. By default, 3 rows must be shown with 2 columns: one column for a comment and the second for the current date. The current date must be populated automatically; the user should not enter the date manually. The user can be given an option to add more rows/entries, plus a save button once the inputs are done (an outputlookup to a CSV is fine). I have done this using a lookup file (where the user manually adds dates) and loading it into a table on the dashboard; however, that's not the approach I am looking for. Would appreciate any leads. TIA.
Hi all, I'm new here, so please let me know if I'm doing anything wrong. Otherwise, the below is my issue. Say, for example, I have the following type of information logged:

Level=Error Exception: Type1
***********************************
Level=Error Exception: Type2
***********************************
Level=Error Exception: Type3
***********************************
...
***********************************
Level=Error Exception: Type10

I have a Splunk search like the following:

index=foo* earliest=-1w latest=@d Level=Error Exception="Type2"
| spath Exception
| rename Exception as Type
| bucket span=1d _time
| stats count by _time, Type
| stats avg(count) as Average by Type
| table Type, Average

This particular search returns the correct average for that Type. However, upon removing Exception="Type2" from the search above, my understanding is that the search should now calculate averages for all the Types. Instead, this causes the Average in the table to be incorrect. Could you please help me understand why the calculation is inaccurate for the existing Type2, as well as for the rest, after opening up the search to all of the Types?
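A common cause (an assumption here, not confirmed from the question): stats count by _time, Type only emits rows for days on which a Type actually occurred, so days with zero events for a Type are silently dropped from its average. A sketch that zero-fills missing days before averaging:

```
index=foo* earliest=-1w latest=@d Level=Error
| timechart span=1d count by Exception
| untable _time, Type, count
| stats avg(count) as Average by Type
```

timechart emits a row for every day in the range, filling absent Type/day combinations with 0, so the average is taken over the same number of days for every Type.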
I have 3 data sets that I'm trying to merge and count.

Data set 1
my_id   |  company_id  |  company_name  |  my-type
100     |  8634535     |  Target        |  COMP
200     |  0583509     |  Disney        |  COMP
300     |  2095497     |  Starbucks     |  COMP
400     |  6433241     |  Microsoft     |  COMP

Data set 2
some-id  |  my-group-name  |  my-type
100      |  ABC            |  GROUP
200      |  EFG            |  GROUP
400      |  XYZ            |  GROUP

Data set 3
some-id  |  error-code  |  error-descr    |  my-type
100      |  900         |  descr for 900  |  ERR
200      |  922         |  descr for 922  |  ERR
200      |  923         |  descr for 923  |  ERR

Results I'm trying to get:
COMPANY_ID  |  COMPANY_NAME  |  GROUP  |  ERR_CODE  |  ERR_DESCR      |  COUNT  |  PERCENT
8634535     |  Target        |  ABC    |  900       |  descr for 900  |  5      |  10
0583509     |  Disney        |  EFG    |  922       |  descr for 922  |  10     |  20
0583509     |  Disney        |  EFG    |  923       |  descr for 923  |  10     |  20
2095497     |  Starbucks     |         |            |                 |  23     |  46
6433241     |  Microsoft     |  XYZ    |            |                 |  2      |  4

I've tried joining the data, but I only seem to get rows where data is available in all data sets, and my counts are off. It looks like the results of the last join are just being repeated. In the joins I specify JOIN_ID since the values are stored in different fields (field my_id in data set 1 and field some-id in data sets 2 and 3). Maybe this is the issue?
My search:

index="index1" my-type="COMP"
| rename my_id as JOIN_ID, company_id as COMPANY_ID, company_name as COMPANY_NAME
| join type=left max=10 JOIN_ID [search index="index2" my-type="GROUP" | table my-group-name | rename my-group-name as GROUP]
| join type=left max=10 JOIN_ID [search index="index2" my-type="ERR" | table error-code, error-descr | rename error-code as ERR_CODE, error-descr as ERR_DESCR]
| top COMPANY_ID, COMPANY_NAME, GROUP, ERR_CODE, ERR_DESCR
| rename count as COUNT, percent as PERCENT
| eval PERCENT=round(PERCENT,2)
| addcoltotals COUNT

I tried top and stats as well, but same results. Any pointers? Thank you.
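One thing that stands out: the subsearches table away everything except the renamed column, so JOIN_ID never survives into the subsearch results and the joins have nothing to match on. A join-free sketch of the same merge (untested; it assumes all three data sets are events distinguishable by my-type, and the hyphenated field names may need single quotes in eval):

```
(index="index1" my-type="COMP") OR (index="index2" my-type IN ("GROUP","ERR"))
| eval JOIN_ID=coalesce(my_id, 'some-id')
| stats values(company_id) as COMPANY_ID values(company_name) as COMPANY_NAME
        values(my-group-name) as GROUP values(error-code) as ERR_CODE
        values(error-descr) as ERR_DESCR count as COUNT by JOIN_ID
| eventstats sum(COUNT) as total
| eval PERCENT=round(COUNT/total*100, 2)
| fields - total
```

Pulling all three data sets into one search and letting stats group on the common key avoids both the dropped-key problem and join's subsearch row limits.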
Hello, I need to check to see if Syslog data is reaching my forwarders.  What would be the best query to use to check this?
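One way to sketch such a check from Splunk's internal metrics (an assumption: the syslog data is indexed with sourcetype "syslog"; adjust the series value to match yours):

```
index=_internal source=*metrics.log group=per_sourcetype_thruput series=syslog
| timechart span=5m sum(kb) as KB by host
```

A forwarder host whose KB column drops to zero has stopped processing syslog data over that interval.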
I've pieced together some SPL that shows me the last time each forwarder sent its log data, but I need to convert | eval Hour=relative_time(_time,"@h") to a normal date-time format, i.e. HH:MM:SS. Any help is greatly appreciated!

index=_internal sourcetype=splunkd group=tcpin_connections component=Metrics
| eval sourceHost=coalesce(hostname, sourceHost)
| rename connectionType as connectType
| eval connectType=case(fwdType=="uf","univ fwder", fwdType=="lwf","lightwt fwder", fwdType=="full","heavy fwder", connectType=="cooked" or connectType=="cookedSSL","Splunk fwder", connectType=="raw" or connectType=="rawSSL","legacy fwder")
| fillnull version value="pre 4.2"
| rename version as Ver arch as MachType
| fields _time, connectType, sourceIp, sourceHost, destPort, kb, tcp_eps, tcp_Kprocessed, tcp_KBps, splunk_server, Ver, MachType
| eval Indexer=splunk_server
| eval Hour=relative_time(_time,"@h")
| stats avg(tcp_KBps) as avg_TCP_KBps avg(tcp_eps) as avg_TCP_eps sum(kb) as total_KB by Hour connectType sourceIp sourceHost MachType destPort Indexer Ver
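The fix can be sketched as converting the bucketed epoch value with strftime after the stats (the format string is one choice among many):

```
| eval Hour=relative_time(_time,"@h")
| stats avg(tcp_KBps) as avg_TCP_KBps by Hour
| eval Hour=strftime(Hour, "%Y-%m-%d %H:%M:%S")
```

relative_time returns epoch seconds, so strftime renders it as a readable timestamp; doing the conversion after stats keeps the numeric value available for grouping and sorting.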
Hello guys, I have the following scenario:

I'm receiving a lot of logs from Kubernetes clusters. I'm sending the logs from Kubernetes to a Splunk heavy forwarder using Splunk Connect for Kubernetes. The sourcetype names are assigned by Splunk Connect using a structure like kube:container:* (example: kube:container:containerNumberOne).

I have the following confs in the props.conf and transforms.conf files:

[(?::){0}kube:*]
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = true
TRANSFORMS-set = setnull, allowEvents, dropEventsByText, dropEventsBySourcetype, set_sourcetype

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[allowEvents]
REGEX = LOG_INI|LOG_FINOK|LOG_FINEX|LOG_FINNEG
DEST_KEY = queue
FORMAT = indexQueue

[dropEventsBySourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = containerNumberOne|containerNumberTwo
DEST_KEY = queue
FORMAT = nullQueue

[dropEventsByText]
REGEX = debug|DEBUG
DEST_KEY = queue
FORMAT = nullQueue

[set_sourcetype]
SOURCE_KEY = MetaData:Sourcetype
REGEX = kube\:container\:(.*)\-re\-
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype

These filters (and the sourcetype rename) have been working well for a while; as you may observe, they filter events based on text contained in the log or on text in the sourcetype name. The problem is that I have a new requirement: I need to drop events based on 2 rules at the same time, a sourcetype name and a text in the log. Specifically, there are some logs with the sourcetype name containerFour and the text LOG_INI that I need to drop. I guess I need something like this (but I know the conf is wrong):

[dropEventsBySourcetypeAndText]
SOURCE_KEY = MetaData:Sourcetype
REGEX = containerNumberFour
REGEX = BCI_INI
DEST_KEY = queue
FORMAT = nullQueue

Does someone know what I need to do? Thanks in advance.
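One approach that can be sketched (untested; the stanza and transform names are placeholders): since a transform takes only one REGEX, scope a text-only transform to the affected sourcetype via its own props.conf stanza instead of trying to combine two regexes in one transform:

```
# props.conf -- this stanza applies only to the one sourcetype
[kube:container:containerNumberFour]
TRANSFORMS-dropinit = dropLogIniEvents

# transforms.conf -- matches the event text (_raw is the default SOURCE_KEY)
[dropLogIniEvents]
REGEX = LOG_INI
DEST_KEY = queue
FORMAT = nullQueue
```

Because the props stanza already restricts which sourcetype the transform runs against, the transform itself only needs to match the text.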
Hi Team, due to the EOL of Adobe Flash, the video below can no longer be viewed: https://www.splunk.com/view/SP-CAAACZW Could you please convert this video to a supported format? Thanks,
Logging in with the Splunk local admin account after single sign-on was enabled: we started using SAML for SSO, so now we don't type any user/password, but I was wondering if I can still log in using the Splunk local admin account. How can I do that?
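For reference, Splunk provides a URL that bypasses SSO and presents the local login form; a sketch of the path (verify against the docs for your version, and note the locale segment may differ):

```
https://<splunk-host>:8000/en-US/account/login?loginType=splunk
```

Local accounts such as admin remain valid alongside SAML; this URL is simply the way to reach the form that accepts them.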
We are monitoring users who are deleting tables in our system. We have a field "user_query" which I want to split on ";". Then, for each split result, I want to check for the text "Drop table *". In addition, users are allowed to drop tables that are from a "temp" dataset or have _tmp in the table name. The reason I want to split each statement on ";" is that if I do a -(*temp.*) or -(*_tmp), it will exclude the whole "user_query", even though in later statements the user may have written "Drop table [tablename_they_should_not_drop]". So basically, I want to split "user_query" on ";", then go through each split result and check for the text "Drop table *" -(*temp.*) or -(*_tmp). If there are any instances, I want to return the full "user_query" in a table in a Splunk alert. I don't know if this can be done using regex or whether I need multiple steps to split and then loop through each result. Help please! TIA
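A sketch of the split-and-filter idea in SPL (untested; the index name and the exact regexes are assumptions):

```
index=db_audit user_query=*
| eval stmt=split(user_query, ";")
| eval bad=mvfilter(match(stmt, "(?i)drop\s+table") AND NOT match(stmt, "(?i)(temp\.|_tmp)"))
| where mvcount(bad) > 0
| table user_query, bad
```

split turns user_query into a multivalue field of statements; mvfilter keeps only the statements that look like a disallowed DROP TABLE; and the where clause keeps events with at least one such statement, so the alert table shows the full original query plus the offending statements.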
Hi, I have configured Splunk email via Server Settings - Email Settings:

Mailhost: smtp-mail.outlook.com:587
Enable SSL: true
Username: xxx@outlook.com
Password: xxx

I am trying to test this mail configuration via the following command:

index=main | head 5 | sendemail to="my-email" server="smtp-mail.outlook.com:587" subject="Here is an email notification" message="This is an example message" sendresults=true inline=true format=raw sendpdf=true

The error I got is:

command="sendemail", ('No cipher can be selected.',) while sending mail to: my-email

It does not seem to be an SMTP error. Can anyone help? Thanks!
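Worth checking (an assumption, not a confirmed diagnosis): port 587 normally expects STARTTLS rather than implicit SSL, which in Splunk's email settings is the TLS flag rather than the SSL flag. Sketched directly in alert_actions.conf:

```
# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
mailserver = smtp-mail.outlook.com:587
use_tls = 1
use_ssl = 0
```

With "Enable SSL" set on port 587, the client attempts an SSL handshake against a server waiting for plain SMTP, which can surface as cipher-selection errors like the one above.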
Hey guys, I've been having trouble finding documentation about removing indexed data. After looking through the "Meta Woot!" app, I saw my hosts were growing by a few thousand a day, and my eStreamer app was logging all .log files as new hosts. I have fixed the logging issue by changing the monitoring string and the host segment portion in the .conf file, but now I'm looking to remove the .log files from the host field. Has anyone ever had an issue like this and knows a fix, or can point me in the right direction? Thanks.
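If the goal is just to make the mis-hosted events unsearchable, the usual sketch is Splunk's delete command (requires a role with the can_delete capability; the index name and host pattern here are assumptions, and delete hides events rather than reclaiming disk space):

```
index=my_index host="*.log" | delete
```

Running the base search first without | delete is a prudent way to confirm exactly which events would be affected.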
We are trying to set an alert for Sub_A to trigger if no data is sent within a 1-hour window. The previous Splunk expert wrote the search below, and I was under the impression that changing "+24h@h" to "1h@h", and "86400" to 3600, would change the parameters of the alert:

| where now()>relative_time(LastFileXfer, "+24h@h")
| eval DaysOld=round((now() - round(LastFileXfer, 0))/86400, 2)

Does anything else need to be changed when saving the alert in the alert's menu section? Thank you.

Search:

index=dart_index source=OPS_NIPR_DART_DMZ_IncomingOutgoing status_message="OK" earliest=-48h@h subscription_name IN ("Sub_A")
| eval DeliveryComplete=strptime(delivery_complete, "%Y-%m-%d %H:%M:%S")
| stats values(src_host) as Source, values(dest_host) as Destination, values(login_name) as DataOwner, values(host_name) as DartNode, values(xfer_type) as XferMethod, min(DeliveryComplete) as EarliestFileXfer, max(DeliveryComplete) as LastFileXfer by subscription_name
| where now()>relative_time(LastFileXfer, "+24h@h")
| eval DaysOld=round((now() - round(LastFileXfer, 0))/86400, 2)
| eval EarliestFileXfer=strftime(EarliestFileXfer, "%Y-%m-%d %H:%M:%S")
| eval LastFileXfer=strftime(LastFileXfer, "%Y-%m-%d %H:%M:%S")
| table subscription_name Source Destination DataOwner DartNode XferMethod EarliestFileXfer LastFileXfer DaysOld
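For what it's worth, the two lines with a 1-hour threshold can be sketched as follows (note the leading + in the relative_time specifier; renaming DaysOld to HoursOld is an assumption for readability):

```
| where now() > relative_time(LastFileXfer, "+1h@h")
| eval HoursOld=round((now() - round(LastFileXfer, 0))/3600, 2)
```

The search change alone only alters the threshold; how often the alert is evaluated is set separately by the alert's schedule (cron/run frequency) in the alert settings, so that would likely need tightening from daily to hourly as well.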
Running Splunk Enterprise, Version 7.3.3, Build 7af3758d0d5e, we cannot use the timeline viz, as we get random errors with the message "Failed to load source for Event Timeline Viz visualization." The "Event Timeline Viz" plugin version is 1.5.0, the latest available (https://splunkbase.splunk.com/app/4370/). Can anyone help? We have been chasing this for quite a while now. We thought it was something related to the browser cache, but we did not manage to find the real root cause. We tried different browsers, same behaviour; Chrome and Firefox show similar error rates. Running the browser on the server where Splunk runs seems to reduce how often the error occurs.
My Splunk Enterprise server (v7.2.6) is on RHEL 6. In outputs.conf, tcpout is configured to send data to a remote host (ArcSight), but the Splunk server's metrics.log shows _tcp_Bps etc. all 0. My Splunk server was able to send data last month, but it stopped recently. What can I check to troubleshoot this issue?
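A sketch for a first look at Splunk's own internal logs (nothing here is specific to the ArcSight setup):

```
index=_internal sourcetype=splunkd component=TcpOutputProc (ERROR OR WARN)
| stats count by log_level, message
```

This surfaces errors from the TCP output processor; a refused or timed-out connection to the remote host, or a blocked output queue, would typically show up here. Checking basic connectivity from the server (e.g. that the destination port is reachable) is the natural companion step.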
Hello, I am trying to find users who have logged into more than one system within the last 30 minutes, and return a list of those users. The stats function of the search does not seem to produce any results after finding all the login sessions, based on the job inspector. The stats function is supposed to find distinct users where hosts is greater than 1.

index="Wawf" L_Action="New session" earliest=-30min latest=now
| stats dc(L_User) as users dc(Linux_Server) as hosts by L_User, Linux_Server
| where hosts>1
| table L_User, Linux_Server
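One likely explanation (an assumption from reading the search): grouping by both L_User and Linux_Server makes every distinct count equal 1, so hosts>1 can never match. A sketch that groups by user only:

```
index="Wawf" L_Action="New session" earliest=-30m latest=now
| stats dc(Linux_Server) as hosts values(Linux_Server) as systems by L_User
| where hosts > 1
```

values() keeps the list of systems each flagged user logged into, so the result table still shows which hosts were involved.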
I stood up a clean install of Splunk in AWS using their latest published AMI (currently Splunk Enterprise 8.1.1 running on Amazon Linux 2). This install has a local user named splunk (with a group named splunk) under which the splunkd process is running.

$ top -u splunk
PID  USER   PR NI VIRT    RES    SHR   S %CPU %MEM TIME+   COMMAND
4613 splunk 20 0  630312  166068 54880 S 0.7  1.0  0:49.25 splunkd
4766 splunk 20 0  105488  13984  5168  S 0.3  0.1  0:01.60 splunkd
4795 splunk 20 0  1642752 65616  28848 S 0.3  0.4  0:13.24 mongod
4874 splunk 20 0  2669968 68704  15300 S 0.0  0.4  0:06.45 python3.7
5154 splunk 20 0  197692  53536  42432 S 0.0  0.3  0:03.05 splunkd

With the instance running, without making any changes, it appears that all new files created by Splunk get 600 file permissions. For example, using the web interface under the default admin account, creating a new app-shared dashboard named test results in the following file/permissions:

$ ls -al /opt/splunk/etc/apps/search/local/data/ui/views/
total 4
drwx------ 2 splunk splunk 22 Jan 11 18:53 .
drwx------ 3 splunk splunk 19 Jan 11 18:53 ..
-rw------- 1 splunk splunk 46 Jan 11 18:53 test.xml

This is not aligned with the umask value of the splunk user, e.g.:

$ id
uid=1001(splunk) gid=1001(splunk) groups=1001(splunk)
$ umask
0002
$ cd /opt/splunk/etc/apps/search/local/data/ui/views/
$ touch test2.xml && ls -al
total 4
drwx------ 2 splunk splunk 39 Jan 11 19:23 .
drwx------ 3 splunk splunk 19 Jan 11 18:53 ..
-rw-rw-r-- 1 splunk splunk  0 Jan 11 19:23 test2.xml
-rw------- 1 splunk splunk 46 Jan 11 18:53 test.xml

Is this behavior of creating files with owner-restricted permissions (not matching the configured umask) expected? If yes (working as expected), is there somewhere in Splunk where these default file permissions can be configured?
Hello, I would like to retrieve multiple values into a single field. Below is an example of a log where I would like to extract the value after "sha256":" up to the next ":

[{"overall_weight":0,"anomaly_types":0,"signature":"DUA.Downselect.PDF.FEBeta","sha256":"babee76d75c74c527c3b836b143277b8d60e4300ab2ebfeb92ed41c6e4b044d3","file_type":36,"uuid":"23e6d432-e357-4f21-b5fe-d596c7e5afec"},
{"overall_weight":0,"anomaly_types":0,"signature":"FAUDE.Downselect.FEBeta","sha256":"5f0708914b9cebd186f48e5574f54fd01927c9a0d48c1941b01e84d8d14de8e6","file_type":36,"uuid":"11e0b0ef-c09f-441e-9a0d-d3fb1ed1a612"},
{"overall_weight":0,"anomaly_types":2048,"signature":"FAUDE.Downselect.FEBeta","sha256":"fd6dd07ea0814a073c437781f7fc85c2ed8e1ccc28e17f19a8f670e419d7f3a6","file_type":36,"uuid":"4fb4310b-61e5-4410-8e5b-b8c775878958"},
{"overall_weight":0,"anomaly_types":2048,"signature":"FAUDE.Downselect.FEBeta","sha256":"ac5de15540b5572e23828e227b800afb65b30f8783ea71d15b842e3f22fd45b8","file_type":36,"uuid":"679ee174-12f1-45df-9fdc-97c9eb53b7d4"}]

The return should be like below:

SHA256
babee76d75c74c527c3b836b143277b8d60e4300ab2ebfeb92ed41c6e4b044d3
5f0708914b9cebd186f48e5574f54fd01927c9a0d48c1941b01e84d8d14de8e6
fd6dd07ea0814a073c437781f7fc85c2ed8e1ccc28e17f19a8f670e419d7f3a6
etc.

Can someone help me please?
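A sketch using rex with max_match to pull every hash into one multivalue field (the base search is a placeholder, and the 64-hex-character pattern is an assumption about the sha256 values):

```
index=my_index sourcetype=my_sourcetype
| rex max_match=0 "\"sha256\":\"(?<SHA256>[0-9a-f]{64})\""
| table SHA256
```

With max_match=0, rex captures all occurrences in the event, so SHA256 becomes a multivalue field listing every hash.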