All Topics

I am trying to make a few changes to this Splunkbase app (github link) to make it Splunk Cloud compatible. The app seems to need only a few very small changes, given the recent requirement for jQuery 3.5. AppInspect gives the following report. The first error should be fixed by just adding version="1.1" to the XML dashboard. The second error also seems fairly simple, but I am not exactly sure where to find these dependencies and how to add them to the app. Any help please?
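For the first error, the jQuery readiness checks key off the version attribute on the dashboard's root node. A minimal sketch of what that change usually looks like in SimpleXML (the label and panel content here are placeholders, not taken from the app):

    <dashboard version="1.1">
      <label>Example dashboard</label>
      <row>
        <panel>
          ...
        </panel>
      </row>
    </dashboard>

(Use <form version="1.1"> instead if the dashboard has inputs.)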
I am trying to create a setup page which lets users add multiple accounts. (The number of accounts to be added is unknown, so I decided to use the Global Account option for this.) I want to get the value of the 'Account Name' column in the Python script generated by Add-on Builder, called 'modalert_appname.py'. By default, the script comes with a helper function called helper.get_user_credential("<account_name>"). The problem is that this function takes the parameter 'username', and I want to use a function which takes the 'account_id' as its argument. Technically, this should be possible with the method helper.get_user_credential_by_id(account_id), as mentioned in the Python helper function documentation. But when I try to use this function, it gives me an error saying 'object has no attribute' (meaning it is not defined on this helper object). Any ideas about how to overcome this problem?
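If the generated helper predates the documented method, one defensive pattern (a sketch, not a confirmed fix; the account_id value and the fallback assumption are mine) is to probe for the newer method and fall back to the by-name lookup:

    # Sketch only, inside the scope where modalert_appname.py has a helper object
    def get_credential(helper, account_id):
        # Prefer the documented by-id lookup if this helper version has it
        if hasattr(helper, "get_user_credential_by_id"):
            return helper.get_user_credential_by_id(account_id)
        # Fallback: assumes the account_id value is also usable as the account name
        return helper.get_user_credential(account_id)

Since the helper class ships inside the generated app, it may also be worth regenerating the add-on with a newer Add-on Builder version that includes the method.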
Hi Team, I am looking to get an incremental count of some data in a dashboard. For example: if the count for a certain table over the past 5 days is 500, then on day 6 I need the incremental data, that is, the 5 days' data plus the 6th day's data. Likewise, I need to capture this for a week. Kindly assist.
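A running (cumulative) count per day can be built with streamstats on top of a daily timechart. A minimal sketch, assuming an index of your own (your_index is a placeholder):

    index=your_index earliest=-7d@d
    | timechart span=1d count
    | streamstats sum(count) as cumulative_count

Each day's row then shows that day's count plus everything before it within the window.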
I have a dashboard; is there a way to move that Dashboard Studio dashboard to another server? Please provide some documentation.
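Dashboard Studio dashboards are saved as view files (an XML wrapper holding the JSON definition) under the owning app, so one way to move one (a sketch, assuming an app named my_app and a dashboard id of my_dashboard, both placeholders) is to copy the view file to the same path on the target server:

    scp $SPLUNK_HOME/etc/apps/my_app/local/data/ui/views/my_dashboard.xml \
        user@target-server:/opt/splunk/etc/apps/my_app/local/data/ui/views/

Alternatively, the JSON shown in the dashboard's Source editor can be pasted into a new Dashboard Studio dashboard on the other server.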
Hello everyone, I have set an Adaptive Response Action (custom bash script) along with a Notable event on a simple correlation search. The Notable triggers, but the script does not. The script is used to initiate a tcpdump capture on an indexer. The script is placed under:
- /opt/splunk/etc/apps/SplunkEnterpriseSecurity/bin/tcpdump.sh
- /opt/splunk/bin/scripts/tcpdump.sh
Owner: splunk
Permissions: 755

tcpdump.sh

    #!/bin/bash
    # Initiate tcpdump (3 dumps of 5 minutes each)
    tcpdump -i ens33 -G 300 -W 3 -w /mnt/nfs/pcaps/pcap-%Y-%m-%d_%H.%M.%S

I tried to create an app with an Adaptive Response Action with Add-on Builder, but my coding skills are not good. How can I troubleshoot why the script is not running at all? Thanks, Chris
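Two hedged starting points for troubleshooting. First, modular alert / adaptive response executions are logged, so searches like these (standard locations, though exact sourcetypes can vary by setup) usually show whether the action was invoked at all and what its exit code was:

    index=_internal sourcetype=splunkd component=sendmodalert

    index=cim_modactions

Second, note that tcpdump normally requires root or the CAP_NET_RAW capability, so even when the action fires, a capture started as the splunk user may die immediately on permissions.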
Hi Team, I have the following result in place with a 30min bucket, using stats values() and then xyseries:

    time   field1            field2           field3            field4
    05:30  4,10,11,12,30     1,13,14,9,8,7    5,7,3,8,9,1,55    23,24,17,18,19
    06:00  19,10,11,12,30    12,3,14,9,8,7    1,17,3,8,1,34     22,2,25,17,18,19
    06:30  20,10,11,12,55    11,13,14,9,18,7  10,7,3,8,9,1,4    23,24,26,1,18,49
    07:00  21,10,11,12,44    12,13,17,9,7     6,7,3,9,1,23      23,24,25,17,18,19
    07:30  31,10,11,12,50    1,13,14,9,8,7    5,7,3,8,9,11      23,24,25,17,18,19
    08:00  1,10,11,12,30,88  12,13,14,9,81    5,7,3,8,9,17      23,24,25,17,18,19
    08:30  1,10,11,12,30,99  12,13,14,9,81    5,7,3,8,9,18      23,24,25,17,18,19
    09:00  1,11,12,30,23     11,1,14,9,7      10,7,3,8,9,18     23,24,25,17,18,19
    09:30  1,10,11,12,300    12,13,4,9,8,7    4,7,3,8,9,1       23,24,25,17,18,19

Currently the result shows all the values for each field. What I am looking for here are the top 3 most frequent values for each field; I am not sure how to pull that result. Could someone guide me?
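One way to keep only the top 3 values per series is to rank value counts per bucket before pivoting. A sketch, assuming that before the xyseries the data has one row per event with a series column field and a value column value (hypothetical names standing in for the actual ones):

    ... | bin _time span=30m
    | stats count by _time, field, value
    | sort 0 _time, field, -count
    | streamstats count as rank by _time, field
    | where rank <= 3
    | stats list(value) as value by _time, field
    | eval value=mvjoin(value, ",")
    | xyseries _time field value

The streamstats rank works because the sort puts the highest counts first within each (_time, field) group.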
Hello everyone, I am trying to create a custom alert action where a tcpdump capture is triggered for the event's src and dest IPs. I created a simple bash script for that:

    #!/bin/bash
    # Initiate tcpdump (3 dumps of 5 minutes each)
    tcpdump -i ens33 -G 300 -W 3 -w /mnt/nfs/pcaps/pcap-%Y-%m-%d_%H.%M.%S

My problem is that this does not use the src and dest IPs of the triggered correlation event. How can I pass these variables in, so that I capture only the traffic between these two hosts rather than everything? Thanks, Chris
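Custom alert actions configured with payload_format = json in alert_actions.conf receive a JSON payload on stdin, with the first triggering result's fields under .result. A sketch of reading the two IPs from that payload with jq (assumes jq is installed on the box, and that the result fields are literally named src and dest):

    #!/bin/bash
    # Read the alert action payload from stdin and pull the two IPs
    payload=$(cat)
    src=$(echo "$payload" | jq -r '.result.src')
    dest=$(echo "$payload" | jq -r '.result.dest')
    # Capture only traffic between the two hosts (filter expression goes last)
    tcpdump -i ens33 -G 300 -W 3 \
        -w /mnt/nfs/pcaps/pcap-%Y-%m-%d_%H.%M.%S \
        "host $src and host $dest"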
Hello, I count results by _time in a table panel like this, and it works perfectly:

    index=toto
    | bin span=1h _time
    | stats count as Pb by s _time
    | search Pb >= 3
    | timechart dc(s) as s span=1h
    | where _time < now()
    | eval time = strftime(_time, "%H:%M")
    | stats sum(s) as nbs by time
    | rename time as Heure

When the result is 0, it is only displayed once there is a result in a later _time bin. For example, if at 7h the result is 0 but at 8h the result is 1, the results for 7h and 8h are correctly displayed. But as long as the result stays 0, nothing is displayed. So I tried this to display results equal to 0, but it doesn't work:

    | eval nbs =if(isnull(nbs, 0, nbs)

Could you help please?
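The attempted eval has the closing parenthesis of isnull() in the wrong place: isnull() takes a single argument. The corrected line, plus the shorter fillnull equivalent:

    | eval nbs=if(isnull(nbs), 0, nbs)

    | fillnull value=0 nbs

One hedged caveat: hours filtered out before the timechart produce no row at all for the final stats to fill, so neither line can create a row that never existed; keeping the timechart output (which emits continuous buckets) as late as possible in the pipeline is what preserves zero rows.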
Hi, what's the latest version of Splunk that's free? Thanks!
Hi, I want to extract the mac_algorithms field with regex from an nmap scan result. Does anyone have an idea how this works best? I've tried a few things; not all fields are found in Splunk. Here you can see my example: https://regex101.com/r/eJ16fA/1

Here is my nmap scan example:

    kex_algorithms: (8) curve25519-sha256@libssh.org ecdh-sha2-nistp256 ecdh-sha2-nistp384 ecdh-sha2-nistp521 diffie-hellman-group-exchange-sha256 diffie-hellman-group-exchange-sha1 diffie-hellman-group14-sha1 diffie-hellman-group1-sha1
    server_host_key_algorithms: (4) ssh-rsa ssh-dss ecdsa-sha2-nistp256 ssh-ed25519
    encryption_algorithms: (14) aes128-ctr aes192-ctr aes256-ctr arcfour256 arcfour128 chacha20-poly1305@openssh.com aes128-cbc 3des-cbc blowfish-cbc cast128-cbc aes192-cbc aes256-cbc arcfour rijndael-cbc@lysator.liu.se
    mac_algorithms: (19) hmac-md5-etm@openssh.com hmac-sha1-etm@openssh.com umac-64-etm@openssh.com umac-128-etm@openssh.com hmac-sha2-256-etm@openssh.com hmac-sha2-512-etm@openssh.com hmac-ripemd160-etm@openssh.com hmac-sha1-96-etm@openssh.com hmac-md5-96-etm@openssh.com hmac-md5 hmac-sha1 umac-64@openssh.com umac-128@openssh.com hmac-sha2-256 hmac-sha2-512 hmac-ripemd160 hmac-ripemd160@openssh.com hmac-sha1-96 hmac-md5-96
    compression_algorithms: (2) none zlib@openssh.com
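A sketch of one approach: capture everything between the mac_algorithms header and the next category, then split it into a multivalue field (assumes the block sits in _raw and that compression_algorithms always follows):

    | rex "(?s)mac_algorithms:\s*\(\d+\)\s*(?<mac_algorithms>.+?)\s*compression_algorithms:"
    | makemv tokenizer="(\S+)" mac_algorithms

The (?s) lets .+? cross line breaks, and the tokenizer splits on runs of non-whitespace, so it works whether the list is space- or newline-separated.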
Hello, I use 2 separate, almost identical searches. Now I want to merge these 2 searches into one. Here are the searches:

    index=toto sourcetype="cit" type=* earliest=@d+7h latest=@d+19h
    | fields citrtt s
    | bin span=1h _time
    | search citrtt > 150
    | stats count as PbPerf by s _time
    | search PbPerf >= 2
    | timechart dc(s) as s span=1h
    | where _time < now()
    | eval time = strftime(_time, "%H:%M")
    | stats sum(s) as nbs by time
    | rename time as Heure

    index=toto sourcetype="cit" type=* earliest=@d+7h latest=@d+19h
    | fields citnet s
    | bin span=1h _time
    | search citnet > 80
    | stats count as PbPerf by s _time
    | search PbPerf >= 2
    | timechart dc(s) as s span=1h
    | where _time < now()
    | eval time = strftime(_time, "%H:%M")
    | stats sum(s) as nbs by time
    | rename time as Heure

Could you help please?
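The two conditions can be counted in a single pass with eval expressions inside the aggregations, producing one series per condition. A sketch of the merged search (the series names s_rtt/s_net and nbs_rtt/nbs_net are made up for illustration):

    index=toto sourcetype="cit" type=* earliest=@d+7h latest=@d+19h
    | fields citrtt citnet s
    | bin span=1h _time
    | stats count(eval(citrtt > 150)) as PbRtt, count(eval(citnet > 80)) as PbNet by s _time
    | timechart span=1h dc(eval(if(PbRtt >= 2, s, null()))) as s_rtt, dc(eval(if(PbNet >= 2, s, null()))) as s_net
    | where _time < now()
    | eval time = strftime(_time, "%H:%M")
    | stats sum(s_rtt) as nbs_rtt, sum(s_net) as nbs_net by time
    | rename time as Heure

count(eval(...)) counts only the events where the condition is true, which replaces the separate search filters from the two original pipelines.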
Hi All, I am not able to see events from Zoom; however, when I search the _internal index I can see a count of events from source=zoom_input.log. Please help me. I have set up the integration as per the Splunk docs, and my input port 9997 is listening too.
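A quick hedged check is to confirm which index (if any) the Zoom events actually landed in, in case they went somewhere other than the index being searched:

    | tstats count where index=* by index, sourcetype
    | search sourcetype=*zoom*

If nothing comes back, the events logged in zoom_input.log were likely never indexed, which points at the input or forwarding configuration rather than the search itself.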
Hi Team, I would like to learn how to build Splunk queries and dashboards. Is there any link or Splunk page available where we can learn via lab exercises or scenario-based training?
Hi splunkers, please tell me the setting so that the blue frame and menu are not displayed when users click an object in Dashboard Studio. In Dashboard Studio, when users click a dashboard object (e.g. map, single value, table, image, etc.), a blue frame around the object and a download/search/full screen menu appear. In some cases this is a little annoying, so I would like to prevent the blue frame and menu from being displayed. Is this possible?
We have many completely different events. Sometimes we get a result based on Search 1, but we want to exclude some records found by Search 2. E.g., given these events:

    event1: {name: mike, action: ok}
    event2: {name: cool, action: ok}
    event3: {name: mike, state: invalid}

My first search (search1) gets event1 and event2:

    ok | table name, action

My second search (search2) gets event3:

    invalid | fields name

However, Search 1 should exclude the results of Search 2; that is, only event2 should be returned. I am trying to achieve this with:

    ok | search NOT name IN ([invalid | fields name]) | table name, action

But obviously it is unsuccessful. Thanks for any advice.
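The usual idiom is to let the subsearch render as field=value pairs and negate it directly, rather than wrapping it in IN(). A sketch (assuming name is an extracted field in both event sets):

    ok NOT [search invalid | fields name]
    | table name, action

The subsearch expands to something like (name="mike"), so the outer NOT drops every event whose name appears in the invalid set.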
Is there any way to set up something on a dashboard that will tell me if a service on a remote CentOS box goes down? I run Arkime on a CentOS 8 box, and I want a dashboard in Splunk that will show me the status of these services:

    arkimecaprute.service
    arkimeviewer.service
    elasticsearch.service

Is this possible?
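One common pattern (a sketch, assuming a Universal Forwarder runs on the CentOS box; the service names are taken from the list above) is a scripted input that emits one status line per service, which a dashboard panel then charts. inputs.conf stanza:

    [script://./bin/service_status.sh]
    interval = 60
    sourcetype = service_status

and the script:

    #!/bin/bash
    # Emit one line per service with its systemd state (active/inactive/failed)
    for svc in arkimecaprute.service arkimeviewer.service elasticsearch.service; do
        echo "$(date '+%Y-%m-%dT%H:%M:%S%z') service=$svc state=$(systemctl is-active "$svc")"
    done

A panel search like ... | stats latest(state) by service would then show the current status of each.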
JSON field=value pairing: I have a log with a single field named TestCategories that has multiple values in it, like x,y,z,... How can I split it and create individual fields like TestCategories1: x and TestCategories2: y?

    TestCategories: x,y,z
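A sketch with makemv plus mvindex (field names follow the question; extend the pattern for as many positions as needed):

    | makemv delim="," TestCategories
    | eval TestCategories1=mvindex(TestCategories, 0),
           TestCategories2=mvindex(TestCategories, 1),
           TestCategories3=mvindex(TestCategories, 2)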
I just built my first lookup table, because I have a CSV of about 200 servers in different IP spaces and I need to do 2 things: 1. confirm the IPs in the CSV are in Splunk, and 2. display, per IP, which ports are listening. So my query has been this:

    index=* | stats count by src_ip, dest_port [| inputlookup networkservers.csv | fields "IPv4 Address" | rename "IPv4 Address " as query

I have confirmed the lookup table is there and I can see it, and I can query the network; I'm just having issues with ingesting the 200+ IPs as search terms and then marrying the ports and protocols with them. Thanks in advance if this makes sense, or am I looking at it all wrong?
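A sketch of the usual shape: the subsearch belongs in the base search, with the lookup column renamed to the field it should match against, so it expands to an OR of src_ip terms (assumes events carry src_ip/dest_port as in the question):

    index=*
        [| inputlookup networkservers.csv
         | fields "IPv4 Address"
         | rename "IPv4 Address" as src_ip
         | format]
    | stats values(dest_port) as listening_ports by src_ip

Renaming the lookup column to src_ip makes the subsearch output (src_ip="a.b.c.d" OR ...), which both confirms which CSV IPs appear in Splunk and groups the listening ports per IP.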
Actual log:

    [{area: "CU", subid: "M", slgdattim: "2022022109515500", slgproc: "1362100032D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "jhuytvfghju789", slguser: "r4uhjk8", slgtc: "SE38", dummy3: "Z0097TORRENT_INBOUND", term_ipv6: "167.211.0.9", sal_data: "Jump to BAP Debugger by user C7C2QC(200): SourceLine:233->228, ByteCode(offset):MOVL(433)->mvqk(423) (Prgm:Z0097TORRENT_INBOUND, Meth:READ_DATA, Incl:Z0097TORRENT_INBOUND_M01, Line:228)"}, {area: "CU", subid: "M", slgdattim: "2022022111111200", slgproc: "0797100035D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "PS2WVV212651CF", slguser: "C6W7RZ", slgtc: "VKM1",dummy2: "D01", term_ipv6: "167.211.0.6", sal_data: "Jump to BAP Debugger by user C6W7RZ(200): SourceLine:48->63, ByteCode(offset):cmpr(2588)->cmpr(2603) (Prgm:PDB, Form:USER_EXIT_FUELLEN, Incl:ffujlkk_ADD_FIELDS, Line:63)"}, {area: "CU", subid: "M", slgdattim: "2022022111122600", slgproc: "0797100035D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "PS2WVV212651CF", slguser: "C6W7RZ", slgtc: "VKM1", dummy1: "D01", term_ipv6: "167.211.0.6", sal_data: "Jump to BAP Debugger by user C6W7RZ(200): SourceLine:63->77, ByteCode(offset):cmpr(2603)->cmpr(2650) (Prgm:APDB, Form:USER_EXIT_FUELLEN, Incl:hgguytol_ADD_FIELDS, Line:77)"}]

I want to break the above log into three events, one per {...} object, as below; please let me know the props.conf.

Event1:

    {area: "CU", subid: "M", slgdattim: "2022022109515500", slgproc: "1362100032D2", slgmand: "200", sid: "S4D", instance: "dummy", counter: "0 ", slgltrm2: "jhuytvfghju789", slguser: "r4uhjk8", slgtc: "SE38", dummy3: "Z0097TORRENT_INBOUND", term_ipv6: "167.211.0.9", sal_data: "Jump to BAP Debugger by user C7C2QC(200): SourceLine:233->228, ByteCode(offset):MOVL(433)->mvqk(423) (Prgm:Z0097TORRENT_INBOUND, Meth:READ_DATA, Incl:Z0097TORRENT_INBOUND_M01, Line:228)"}

(and likewise Event2 and Event3 for the second and third objects)
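A sketch of props.conf for this shape (the sourcetype name is a placeholder, and the timestamp handling assumes slgdattim is %Y%m%d%H%M%S followed by two trailing digits):

    [my_json_array]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = \}(,\s*)\{
    SEDCMD-strip_array_open = s/^\[//
    SEDCMD-strip_array_close = s/\]$//
    TIME_PREFIX = slgdattim:\s*"
    TIME_FORMAT = %Y%m%d%H%M%S
    MAX_TIMESTAMP_LOOKAHEAD = 14

LINE_BREAKER discards only the captured ",\s*", so each event keeps its braces, and the two SEDCMDs strip the leading [ from the first event and the trailing ] from the last.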
I am updating a CSV on disk via the search API using outputlookup. Each time I run my script with the same source CSV, the CSV on disk is updated correctly and shows the correct number of rows. But when I try to read the CSV via the UI or the /jobs & /export APIs, it doesn't return the correct number of rows approximately 40% of the time. If I run my script 10 times, 6 times it's a perfect match (the row count is what I expect) and 4 times it fails with a random number of rows returned. Each time, though, the number of rows in the CSV on disk is correct. From what I can see, I am not hitting quotas or limits, and it doesn't seem to be related to system load. There isn't anything obvious to me in any logs either. My flow is like this:

    API call with append=f to overwrite the CSV / set column names
    Multiple API calls with append=t to add data (10 rows of data per call)
    Final search to ensure the expected number of rows match

My search to validate is like this, and this is where the inconsistencies happen:

    | inputlookup my_new_file.csv
    | table <column name, column name....>

My CSV is currently 110 columns and 5625 rows. When the run works, the search returns the correct number of rows every time; when it doesn't, it consistently returns the wrong number. To clarify: if I get 500 rows instead of the expected 5625, every search will return 500 rows until I re-run my script to update the CSV on disk. Thanks
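Since the CSV on disk is always correct but reads are intermittently short, one thing worth ruling out (an assumption, not a diagnosis) is that a later call is dispatched before every append job has fully finished. A sketch that polls each job's status via the standard /services/search/jobs endpoint before moving on (host and credentials are placeholders; uses the third-party requests library):

    import time
    import requests

    BASE = "https://splunk.example.com:8089"   # placeholder management host
    AUTH = ("admin", "changeme")               # placeholder credentials

    def wait_for_job(sid: str, timeout: float = 120.0) -> None:
        """Poll the search job until isDone is set before issuing the next call."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            r = requests.get(f"{BASE}/services/search/jobs/{sid}",
                             params={"output_mode": "json"},
                             auth=AUTH, verify=False)  # verify=False only for lab setups
            r.raise_for_status()
            content = r.json()["entry"][0]["content"]
            if content.get("isDone"):
                return
            time.sleep(1)
        raise TimeoutError(f"job {sid} did not finish in {timeout}s")

If every append job is confirmed done and the short reads persist, the next thing to check would be lookup file replication timing across search heads, if this is a clustered environment.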