All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi,

I have this search, which graphs calls to phone numbers:

index=myindex sourcetype=mysource Number IN (5551,5555,55557,3249,2850,3353,6150) | timechart span=1h count by Number

I have a different search that returns bad calls:

index=myindex sourcetype=mysource "1 4 3 1" | stats count as BAD

I want to overlay the count of bad calls from search 2 on the timechart from search 1.
Hi Splunkers, I have read that Splunk DB Connect uses a JDBC driver to fetch events, which means the indexer or HF requests these events from the table at a certain interval. Does Splunk DB Connect need a UF?
Hi, we always place props.conf in the parsing app, but today I saw a config where props.conf is placed inside the monitoring app. The architecture here is UF --> HF --> Indexer. In all other cases props.conf is placed under the parsing app, so why is it placed under the monitoring app in this one case?
Hi Splunkers, has anyone created a carousel dashboard in which one group of panels appears on one slide and another group of panels on another slide? I've been researching this for a week and unfortunately found only 2 results, neither of which matches what I am looking for. The first one is https://splunkbase.splunk.com/app/4342/ from Splunkbase, which has the carousel visualization but only works with single value visualizations. The other one is https://www.splunk.com/en_us/blog/tips-and-tricks/animate-simplexml-dashboard-elements-with-a-carousel.html, a jQuery animation library which consumes a lot of space in a dashboard and only works on a single panel. I hope you guys can help me with this. Thank you.
Hello,

`weatherevent` | fields host EventCode | eval time = strftime(_time, "%m/%d/%Y %H:%M") | search host=tutu | eval EventCode=case(EventCode==41, "System reboot without cleanly shutting down first", EventCode==10111, "Device offline due to a user-mode driver crash", EventCode==10110, "Problem with user-mode drivers") | stats count as ID by EventCode | eval ID=if(isnull(ID),"No critical ID for this host",ID)

I need to display a message in a chart panel if there are no results. I have tried to do it with the eval command above, but it doesn't work. Could you help me, please?
Hi Splunkers, can we install Splunk DB Connect on the deployment server to forward MS SQL audit logs to the indexers? Or is there any alternative way to send MS SQL audit logs without using Splunk DB Connect?
Hi, bit new to splunk, looking for suggestions on one of my search queries: Here's some sample events that I receive { "inbound.network.service": "192.168.1.7, 50990", "inbound.bytes.count": 10... See more...
Hi, I'm a bit new to Splunk and looking for suggestions on one of my search queries. Here are some sample events that I receive:

{ "inbound.network.service": "192.168.1.7, 50990", "inbound.bytes.count": 10000, "connection.created" : 1603614820 }
{ "inbound.network.service": "192.168.1.20, 50700", "inbound.bytes.count": 40000, "connection.created" : 1603555900 }
{ "inbound.network.service": "192.168.1.7, 50990", "inbound.bytes.count": 10200, "connection.created" : 1603614820 }
{ "inbound.network.service": "192.168.1.20, 50700", "inbound.bytes.count": 43000, "connection.created" : 1603555900 }
{ "inbound.network.service": "192.168.1.7, 50990", "inbound.bytes.count": 500, "connection.created" : 1603679500 }

Details about the individual fields:
- "inbound.network.service" has the endpoint for each connection in my server.
- "inbound.bytes.count" has the cumulative inbound byte count for a particular inbound connection. This count is reset when the connection is reset.
- "connection.created" has the timestamp when the connection was created. The timestamp value remains the same as long as the connection is valid, and is reset to a new timestamp when the connection is re-established. "inbound.bytes.count" is also re-initialized when that happens.

Here's my existing search query to list bytes received for a particular inbound network & service:

index=inettraffic source="my-server" inbound.network.service="192.168.1.7, 50990" | reverse | delta connection.created as connection_reset | delta inbound.bytes.count as bytes_diff | eval bytes_received = case(connection_reset == 0, bytes_diff, connection_reset > 0, 'inbound.bytes.count', true(), 0) | table _time, inbound.network.service, connection.created, connection_reset, inbound.bytes.count, bytes_diff, bytes_received

Questions:
(1) Is there a better way this can be done?
(2) I am looking to get a similar tabular listing of bytes received over _time for all inbound network & services that I have in my events so far. How do I do this? With delta, it ends up taking the difference from the previous event, which may not be for the same inbound.network.service.

Appreciate all your help here, thanks.
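The reset-aware, per-connection delta the question describes can be sketched outside SPL. This illustrative Python (the sample events are from the question; the grouping approach, tracking the previous byte count and creation timestamp per service, is an assumption about the intended semantics) mirrors the logic:

```python
# Sketch of the per-connection delta logic: group events by
# "inbound.network.service", remember each connection's previous byte
# count and "connection.created" timestamp, and restart the running
# count whenever the creation timestamp changes (connection reset).
events = [
    {"inbound.network.service": "192.168.1.7, 50990",
     "inbound.bytes.count": 10000, "connection.created": 1603614820},
    {"inbound.network.service": "192.168.1.20, 50700",
     "inbound.bytes.count": 40000, "connection.created": 1603555900},
    {"inbound.network.service": "192.168.1.7, 50990",
     "inbound.bytes.count": 10200, "connection.created": 1603614820},
    {"inbound.network.service": "192.168.1.20, 50700",
     "inbound.bytes.count": 43000, "connection.created": 1603555900},
    {"inbound.network.service": "192.168.1.7, 50990",
     "inbound.bytes.count": 500, "connection.created": 1603679500},
]

prev = {}  # service -> (connection.created, bytes.count)
received = []
for ev in events:
    svc = ev["inbound.network.service"]
    created, count = ev["connection.created"], ev["inbound.bytes.count"]
    if svc in prev and prev[svc][0] == created:
        received.append(count - prev[svc][1])  # same connection: delta
    else:
        received.append(count)  # new or reset connection: full count
    prev[svc] = (created, count)

print(received)  # → [10000, 40000, 200, 3000, 500]
```

The key design point is that all state is keyed by the service field, which is exactly what the plain `delta` command does not do, since it only looks at the immediately preceding event regardless of which connection it belongs to.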
Hello Splunkers, we need to fetch MS SQL logs into our Splunk. We aren't allowed to use Splunk DB Connect for security reasons. What we are currently planning is to log all MS SQL audit events and trace events to the standard Windows Application event log, and fetch the logs from the Windows Application log with a UF. Will the "Splunk Add-On For MS SQL Server" be helpful for us in any way here? I believe this add-on has a dependency on Splunk DB Connect. I'm also concerned about one more thing: will all fields of the MS SQL audit events be parsed? Please check this community link regarding the parsing issue: https://community.splunk.com/t5/Splunk-Enterprise-Security/Monitoring-MS-SQL-logs-from-Windows-Event-viewer/m-p/439031
I have created my account on Splunk (cloud) and I have created a token. When I try to push logs to Splunk, I run into the error below. Please help.

PROMPT>
```
docker run --publish 80:80 --log-driver=splunk --log-opt splunk-token=<MyTokenHere> --log-opt splunk-url=https://prd-p-9140a.splunkcloud.com --log-opt splunk-insecureskipverify=true --log-opt splunk-sourcetype=docker-test --log-opt splunk-index=main nginx
```
----------| ERROR |----------
```
docker: Error response from daemon: failed to initialize logging driver: splunk: failed to verify connection - 404 Not Found - <title>Page not found! - Splunk</title> (Splunk's static HTML error page follows)
```
Please help.
Greetings, is there any way to query Splunk to see whether host disk drives have excessive write activity vs. read activity? I'm trying to monitor for ransomware infections; lots of write activity might indicate files being encrypted or modified. I am a consumer of our Splunk environment, so I don't have the ability to change the environment, and I only have access to the indexes in which my hosts reside. For example, I don't have access to the _internal index. Thanks for any help.
Hi, can I set up Splunk for continuous traffic capture from a Cisco switch? I need to collect captured network traffic for a 30-day period. Any articles or steps on how to do that? Thanks.
Hello. I'm trying to create a field for all events in a search. The field is a value from an inputlookup. There are no shared fields between the lookup and the search in the conventional sense. My lookup is organized as follows:

ID    email1            email2            email3
1     ex1@gmail.com     ex2@gmail.com     ex3@gmail.com
2     ex4@gmail.com     ex5@gmail.com     ex6@gmail.com
3     ex7@gmail.com     ex8@gmail.com     ex9@gmail.com
4     ex10@gmail.com    ex11@gmail.com    ex12@gmail.com

|inputlookup email.csv | search ID = "1" | strcat email1 ", " email2 ", " email3 emails | table emails

The above search gives me my desired output of emails=ex1@gmail.com, ex2@gmail.com, ex3@gmail.com

But when I pop it into an eval statement to give each event that field/value, I get an error about a malformed eval. Below is the eval I am trying to do:

index=main (insert search here) | eval test = [|inputlookup email.csv | search ID = "1" | strcat email1 ", " email2 ", " email3 emails | return $emails ]

Any help would be greatly appreciated. Thanks!
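For reference, the intended end result can be sketched outside SPL. This illustrative Python (the in-memory lists stand in for email.csv and the search results; nothing here claims to be the correct SPL subsearch syntax) shows the logic being asked for: pick the lookup row with a given ID, join its email columns into one string, and attach that string to every event.

```python
# Stand-in for email.csv: one row per ID, three email columns.
lookup = [
    {"ID": "1", "email1": "ex1@gmail.com", "email2": "ex2@gmail.com",
     "email3": "ex3@gmail.com"},
    {"ID": "2", "email1": "ex4@gmail.com", "email2": "ex5@gmail.com",
     "email3": "ex6@gmail.com"},
]

# Select the row for ID 1 and join its email columns (the strcat step).
row = next(r for r in lookup if r["ID"] == "1")
emails = ", ".join([row["email1"], row["email2"], row["email3"]])

# Attach the joined string to every event (the eval step).
events = [{"host": "a"}, {"host": "b"}]
for ev in events:
    ev["emails"] = emails

print(events[0]["emails"])  # → ex1@gmail.com, ex2@gmail.com, ex3@gmail.com
```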
Hello everybody, I installed the Microsoft Azure Add-on for Splunk (https://splunkbase.splunk.com/app/3757/#/details) and connected it with Metrics in Azure. However, only a few metrics are linked; the most important one, the NATGateway metric (Microsoft.Network/natGateways), does not work. I hope someone can help with my two questions.

1. Is there a way to get NATGateway data using the Microsoft Azure Add-on for Splunk?
2. Can you recommend other add-ons to integrate NATGateway?

Thank you.
Hi, I have a log like the following:

rid=iqwenoasd service=CP scopes=add-w,oot-s fields=birthdate,emails,identifier issuer=AWS CA TE empty=

My expected key-value pairs are like this:

rid=iqwenoasd
service=CP
scopes=add-w,oot-s
fields=birthdate,emails,identifier
issuer=AWS CA TE
empty=null

Since there are some special cases in it (a value containing a space character, an empty value), how can I use REGEX on the search servers to achieve that? Using ([^=]+)=(.+) doesn't seem to work in transforms.conf:

[cp_logs_report]
REGEX = ([^=]+)=(.+)
FORMAT = $1::$2

and in props.conf:

[my_sourcetype]
REPORT-cp-logs-ext = cp_logs_report
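One pattern that handles both special cases is a lazy value followed by a lookahead for the next `key=`. This Python sketch demonstrates the idea on the sample log; the exact pattern and the "null" substitution are illustrative assumptions, and the escaping rules in transforms.conf may differ from Python's:

```python
import re

# Parse space-separated key=value pairs where values may themselves
# contain spaces (e.g. "issuer=AWS CA TE") or be empty. The lazy value
# plus a lookahead for the next "key=" keeps each value from swallowing
# the following pair.
PAIR_RE = re.compile(r"(\w+)=(.*?)(?=\s+\w+=|$)")

log = ("rid=iqwenoasd service=CP scopes=add-w,oot-s "
       "fields=birthdate,emails,identifier issuer=AWS CA TE empty=")

pairs = {k: (v if v else "null") for k, v in PAIR_RE.findall(log)}
print(pairs["issuer"])  # → AWS CA TE
print(pairs["empty"])   # → null
```

The lookahead is the part that the original `([^=]+)=(.+)` is missing: `.+` is greedy and runs to the end of the line, so only one pair is ever captured.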
Hi Team, I started Splunk with the command below after installation:

./splunk start --no-prompt --accept-license

I then created a user using this CLI command:

splunk add user asing13 -role Admin -password notmypassword

But now I am not asked for a password when I start or stop Splunk. How can I enable a user ID and password check for starting and stopping Splunk? I have already gone through the documentation below but did not find my answer; I tried creating the file with the hashed password and everything it describes, so maybe I missed something: https://docs.splunk.com/Documentation/Splunk/8.1.0/Security/Secureyouradminaccount I also took a backup of the ldap.conf file to check whether that could be the reason, but Splunk still doesn't ask for a user ID and password. Any help would be highly appreciated! Thank you in advance!

Regards, Abhishek Singh
Is there a limit to how many accelerated reports you can run? I would like to run hundreds of them. I'm assuming the amount of resources I have determines that. I have something close to the minimum requirements found here https://docs.splunk.com/Documentation/Splunk/8.1.0/Capacity/Referencehardware 
I am trying to extract a portion of the source as a field. Here's what the source looks like:   D:\Host Logs\info.server.02.mfl   I'm trying to extract "info" from the source filename and this wo... See more...
I am trying to extract a portion of the source as a field. Here's what the source looks like:   D:\Host Logs\info.server.02.mfl   I'm trying to extract "info" from the source filename and this works perfectly as a splunk search:   search | rex field=source "D:\\\Host Logs\\\(?<newfield>[\w]+).\w+.\w+.(mfl|MFL)"   But if I put this in props.conf for this sourcetype as a search-time extraction, it's not working:   EXTRACT-sourcefield = "D:\\\Host Logs\\\(?<newfield>[\w]+).\w+.\w+.(mfl|MFL)" in source   I've tried multiple versions of this, taking out the extra "\", removing the quotes, etc., but cant seem to get this field to extract. A "splunk btool props list..." shows that the props.conf file is being used. It's in an app in /opt/splunk/etc/apps. This is Splunk Enterprise 8.0.4.1. Edit: Here are all the versions I've tried so far...   EXTRACT-sourcefield = D:\x5CHost Logs\x5C(?<newfield>[\w]+).\w+.\w+.(mfl|MFL) in source EXTRACT-sourcefield = D:\\Host Logs\\(?<newfield>[\w]+).\w+.\w+.(mfl|MFL) in source EXTRACT-sourcefield = "D:\\\Host Logs\\\(?<newfield>[\w]+).\w+.\w+.(mfl|MFL)" in source EXTRACT-sourcefield = "D:\\Host Logs\\(?<newfield>[\w]+).\w+.\w+.(mfl|MFL)" in source  
Hi, I would like to log network traffic for 10 servers in my environment for a period of 60 days and analyze it later on. Can Splunk gather this? Thanks.
Hello, I wrote the following query, which calculates the success percentage based on an expected 1 value every 5 minutes, which is 288 per day. But when the search window is less than a day or rolls over multiple days, I need to compute the success percentage dynamically, deriving the expected count from the duration divided by 5 minutes. How can this be done? Below is the query for 24 hours of values; the number divided by needs to change dynamically based on the duration:

earliest=-24h index=error_log | eventstats count as Success_Count by "Properties.QueryName" | eval Success_Percentage=round(Success_Count/288*100,2) | table Success_Percentage
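The arithmetic being asked for is independent of SPL, so here it is as a small Python sketch: derive the expected number of 5-minute intervals from the actual search duration instead of hard-coding 288, then compute the percentage. The function name and the duration inputs are illustrative:

```python
# One value is expected every 5 minutes (300 seconds), so the expected
# count for any window is its duration divided by 300. For 24 hours
# that reproduces the hard-coded 288 from the original query.
INTERVAL_SECONDS = 300

def success_percentage(success_count: int, duration_seconds: int) -> float:
    expected = duration_seconds / INTERVAL_SECONDS
    return round(success_count / expected * 100, 2)

print(success_percentage(288, 24 * 3600))  # 288 expected → 100.0
print(success_percentage(100, 12 * 3600))  # 144 expected → 69.44
```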
In the dataset below, each user has one ISP that differs from their usual one: NordVPN for John and Quadranet for Jill. But this search using anomalydetection is detecting only John's, not Jill's. Any idea why, and what is a better way to detect the ISP outlier?

source="isp_data2.csv" index="test" sourcetype="csv" | anomalydetection "ISP" "EmailAddress" action=annotate | eval isOutlier = if(probable_cause != "", "1", "0") | table "ISP" "EmailAddress", probable_cause, isOutlier | sort 100000 probable_cause

EmailAddress        ISP        timestamp
john@example.com    Comcast    1/1/2020
john@example.com    Comcast    1/2/2020
john@example.com    Comcast    1/3/2020
john@example.com    Comcast    1/4/2020
john@example.com    Comcast    1/5/2020
john@example.com    Comcast    1/6/2020
john@example.com    Comcast    1/7/2020
john@example.com    Comcast    1/8/2020
john@example.com    Comcast    1/9/2020
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    NordVPN    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
john@example.com    Comcast    ########
jill@example.com    Spectrum   2/1/2020
jill@example.com    Spectrum   2/2/2020
jill@example.com    Spectrum   2/3/2020
jill@example.com    Spectrum   2/4/2020
jill@example.com    Spectrum   2/5/2020
jill@example.com    Spectrum   2/6/2020
jill@example.com    Spectrum   2/7/2020
jill@example.com    Spectrum   2/8/2020
jill@example.com    Spectrum   2/9/2020
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Quadranet  ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   ########
jill@example.com    Spectrum   3/1/2020
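A per-user rarity check is one alternative to a single global anomaly test: count each user's events per ISP and flag any ISP that accounts for less than some small share of that user's traffic. This Python sketch illustrates the idea on counts matching the dataset above; the 10% threshold and the in-memory rows are illustrative assumptions, not a claim about how the anomalydetection command works internally:

```python
from collections import Counter, defaultdict

# Rows matching the dataset: John is almost always on Comcast with one
# NordVPN event; Jill is almost always on Spectrum with one Quadranet
# event.
rows = (
    [("john@example.com", "Comcast")] * 22
    + [("john@example.com", "NordVPN")]
    + [("jill@example.com", "Spectrum")] * 28
    + [("jill@example.com", "Quadranet")]
)

# Count events per ISP, separately for each user.
by_user = defaultdict(Counter)
for user, isp in rows:
    by_user[user][isp] += 1

# Flag any (user, ISP) pair that covers < 10% of that user's events.
outliers = {
    (user, isp)
    for user, counts in by_user.items()
    for isp, n in counts.items()
    if n / sum(counts.values()) < 0.10
}

print(sorted(outliers))
```

Because the rarity is judged per user rather than across the whole dataset, both NordVPN-for-John and Quadranet-for-Jill are flagged, which is the behavior the question is after.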