All Topics


Hi, we have a user named user1. After some days the user became hidden under Settings > Users, but that user can still log in to Splunk Web, and we can't find the user in the Users list.
Hi, I have two values in a field named "Online Booking" and want to display them in tabular format. I tried the eval and stats commands but did not get the expected result. Question: a total count to find out which option was used to book a table. The output I am expecting is shown above. Please let me know which command should be used, and also any reference books for learning Splunk development; I am completely new to Splunk.
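A minimal sketch of the kind of count-by-value search that usually fits this (the index name is an assumption; a field name containing a space must be quoted in the by clause):

```
index=your_index
| stats count by "Online Booking"
```

This produces one row per distinct value of the field, with its event count.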
Hi all! I've populated a table dynamically, based on a dropdown the user can manipulate, using:

| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check]

Additionally, I can extract each column into its own dropdown using the following, varying order = # per column:

| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check]
| eval value= [| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check] | transpose | fields column | streamstats count(column) as order | where order = 2 | return $column]
| fields value
| table value
| stats values(value) as value
| mvexpand value

The issue I'm running into now is pairing the dropdowns with the table. A dropdown sets the token "filter1", which is used within:

| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check]
| search [| inputlookup [| rest /services/saved/searches | fields title | search title=*$userInput$*.c | eval check=replace(title,"\.c",".csv") | return $check] | head 1 | table * | transpose | head 1 | return $column] = $filter1$

The token is not functioning and will not populate in the query, while other tokens will. I've also confirmed the query works on its own, just not when used with tokens within a dashboard.

To summarize, the overall goal is to provide 4-6 dropdowns and a table. Based on the initial user request, the table populates with data, and each dropdown can then further filter that data, with the field names and values all varying. If something isn't clear, please ask and I'd be glad to elaborate; I've been banging my head against this one for a while.
Hello Team, hope you are doing well. I really need your support with an issue I am experiencing: logs are no longer being received from syslog sender devices into our Splunk instance. Logs were being received before, but today none are coming in.

I checked the Splunk forwarders and found them running, and splunkd is also running. However, I found an error, though I don't know whether it is the root cause. Below is what I see when I check the service status; even systemctl restart splunk-suf.service doesn't help, it still reports a failed status:

bash-4.2$ systemctl status splunk-suf.service
* splunk-suf.service - splunk Universal Forwarder service
  Loaded: loaded (/etc/systemd/system/splunk-suf.service; enabled; vendor preset: disabled)
  Active: failed (Result: start-limit) since Sat 2021-09-25 11:28:14 CAT; 3min 3s ago
  Process: 58723 ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd --accept-license --no-prompt --answer-yes (code=exited, status=1/FAILURE)
  Main PID: 58723 (code=exited, status=1/FAILURE)

Kindly help me solve this issue, and share the troubleshooting CLI commands to check why the receiving Splunk instance is not getting logs. I also want to check whether the firewall is blocking anything; which commands should I use for that? Any other advice that may help me resolve this is welcome.

My OS: CentOS; Splunk Enterprise.

Thank you in advance.
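A few generic diagnostics that are often useful in this situation (illustrative only; the paths assume a default /opt/splunkforwarder install, and the forwarding port assumes the 9997 default):

```
# Why did the unit fail? Read the service journal and splunkd's own log.
journalctl -u splunk-suf.service --no-pager -n 50
tail -n 100 /opt/splunkforwarder/var/log/splunk/splunkd.log

# Validate the forwarder's configuration and inspect its outputs settings.
/opt/splunkforwarder/bin/splunk btool check
/opt/splunkforwarder/bin/splunk btool outputs list --debug

# On CentOS with firewalld: is anything blocking the forwarding port?
firewall-cmd --list-all
```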
I have a macro that adds a backslash to an existing backslash:

[backslash(1)]
args = arg
definition = replace("$arg$", "(\\\\)", "\\\\\\\\")
iseval = 1

This works:

index=perfmon counter=`backslash(\processor)`

This fails when the arg has spaces:

index=perfmon counter=`backslash("\processor time")`

The expanded search string:

(counter=\\processor index=perfmon time)

How do I get:

index=perfmon counter="\\processor time"

Please show me my mistake, as I have been banging my head on the desk for hours...
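For intuition on the doubling itself (separate from the macro-argument quoting, which is what is losing the surrounding quotes here), the same replacement can be sketched in Python; the input strings are illustrative:

```python
import re

def double_backslashes(counter: str) -> str:
    """Double every backslash in the input, as the macro's replace() intends."""
    # r'\\' matches one literal backslash; the replacement r'\\\\' emits two.
    return re.sub(r'\\', r'\\\\', counter)

print(double_backslashes(r'\processor time'))  # \\processor time
```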
We upgraded some apps and add-ons the other day, and ever since then we're running into issues with searches breaking and error messages stating: "Problem replicating config (bundle) to search peer '<X.X.X.X:8089>', Upload bundle="E:\Splunk\var\run\<bundle_id>.bundle" to peer name=<indexer> uri=https://X.X.X.X:8089 failed; http_status=400 http_description="Failed to untar the bundle="E:\Splunk\var\run\searchpeers\<bundle_id>.bundle". This could be due Search Head attempting to upload the same bundle again after a timeout. Check for sendRcvTimeout message in splund.log, consider increasing it.".

We increased sendRcvTimeout in distsearch.conf on our search heads from the default of 60 to 300, but we're still getting this, and most of our searches come back with 0 results.

Has anyone come across this before? I haven't really seen other posts with that "failed to untar" error message. I looked through some of the add-ons and apps included in the bundles, but I didn't see any really large lookup .csv files or large files in the /bin directory. (I did see one add-on mentioned in splunkd.log that said "File length is greater than 260, File creation may fail", followed by the untar immediately failing; I am investigating this further.)

Our architecture includes a search head deployer with 4 SHs, an index cluster master with 4 indexers, and a deployment server. Thank you for any advice!
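For reference, the timeout change described above lives in distsearch.conf on each search head; a sketch (the 300 value is the change described in the post, not a recommendation):

```
# $SPLUNK_HOME/etc/system/local/distsearch.conf
[replicationSettings]
sendRcvTimeout = 300
```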
Hi Splunkers, can you please share the procedure to upgrade the Microsoft Azure Add-on for Splunk from 2.x to 3.x? We have upgraded Splunk to version 8.1. We tried to find the procedure on the Splunk site but could not find it. Please assist us. Thanks & Regards, Abhijeet B.
index=test sourcetype=test_access tag=prod server_name!="www.test.com" earliest=-4h latest=now
| timechart eval(avg(request_time)*1000) as "Today"
| appendcols [search index=test sourcetype=test_access tag=prod server_name!="www.test.com" earliest=-7d-4h latest=-7d | timechart eval(avg(request_time)*1000) as "LW1"]
| appendcols [search index=test sourcetype=test_access tag=prod server_name!="www.test.com" earliest=-14d-4h latest=-14d | timechart eval(avg(request_time)*1000) as "LW2"]
Hi folks, I am creating a Splunk dashboard and have some questions regarding the multiselect input.

1. I want to add a special option `all`; selecting it should mean all options are selected. I added a static option `all`, but a user can select both `all` and other options at the same time, which looks odd. So my first question is: how can I make the `all` option either mutually exclusive with the other options, or make selecting `all` automatically select all of the other options (except `all` itself)?

2. The multiselect token is currently used in a `WHERE` clause, `| where $multiselect_roles$`, and the current configuration of the multiselect is such that the interpolated clause looks like `| where role_name="value1" OR role_name="value2"`. My second question: when `all` is selected, how can I either omit the whole `WHERE` clause, or make it trivial (present but not actually filtering anything)? I tried giving the `all` option an empty string, and also `*`, but neither works.

3. When populating the multiselect's dynamic options from a query, I want to reference other inputs as query parameters. For example, I already have an input whose token name is `environment` and a time range input, and I only want the distinct values of a column for the given environment and time range, like this:

`from_index_distapps` sourcetype="xyz" "request=" earliest=$time_range$
| rex field=message "request=\"(?[^}]+})"
| eval arjson=replace(arjson, "\\\\\"", "\"")
| spath input=arjson
| where environment=$environment$
| table role_name
| dedup role_name

How do I correctly reference other inputs here?
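For context, a minimal sketch of the kind of Simple XML multiselect described (the token name, field name, and populating search are assumptions); the valuePrefix/valueSuffix/delimiter elements are what build the `role_name="..." OR ...` clause from the selected values:

```
<input type="multiselect" token="multiselect_roles">
  <label>Roles</label>
  <choice value="*">all</choice>
  <valuePrefix>role_name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>role_name</fieldForLabel>
  <fieldForValue>role_name</fieldForValue>
  <search>
    <query>index=example | stats count by role_name | fields role_name</query>
  </search>
</input>
```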
Hello, I have this query:

hostalias=$hostname$ AND actor AND total | timechart span=1s count by actor | stats

This returns the stats for all the actors in one row, but I want a table where each row is a specific actor with the resulting max/avg/p50/p99 statistics for that actor, something like:

Actor        max   avg   p50   p99
actorName1
actorName2
actorName3

I tried the following query, but nothing was returned:

hostalias=$hostname$ AND actor AND total | timechart span=1s count as TPS | stats max(TPS) as maxTPS avg(TPS) as avgTPS p50(TPS) as p50TPS p99(TPS) as p99TPS by actor

I had something similar working before, but there was no timechart involved. Is this possible to do with timechart? Thanks for any insights.
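One common pattern for this shape of problem (a sketch, untested against this data): replace timechart with bin + stats so the actor field survives into the per-second counts, then aggregate per actor:

```
hostalias=$hostname$ AND actor AND total
| bin _time span=1s
| stats count as TPS by _time, actor
| stats max(TPS) as maxTPS avg(TPS) as avgTPS p50(TPS) as p50TPS p99(TPS) as p99TPS by actor
```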
Hi, I want to know how long and when either of two games is being played on the PS4 or a laptop, and be notified via email with the IP address, when the game play started, when it stopped, and the duration. There are multiple game play sessions during the day, and I also want to graph game play by day and by week.

I am using a squid proxy, and the destination traffic for both games is known: for example, api.gamesite1.com for game 1 and api.gamesite2.com for game 2. The traffic is initiated from the PS4 or laptop every 14 seconds on average while a game is being played, and stops appearing when play stops. Since multiple sessions of either game can be played during the day, I want to capture, for each game session, the source IP address, the start and finish times, and the duration between them. Can anyone help me with how to do this?
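A sketch of one way to group the polling traffic into sessions (the index, sourcetype, and field names are assumptions for whatever the squid logs extract; maxpause should comfortably exceed the ~14 s polling interval so a session isn't split):

```
index=proxy sourcetype=squid (dest_host="api.gamesite1.com" OR dest_host="api.gamesite2.com")
| transaction src_ip, dest_host maxpause=60s
| eval start_time=strftime(_time, "%F %T"), end_time=strftime(_time + duration, "%F %T")
| table src_ip, dest_host, start_time, end_time, duration
```

Each resulting row is one play session; the duration field comes from the transaction command and can feed a timechart for the daily/weekly graphs.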
Hi, Team! I have a rule:

index = example source = "Rule"
| fields user, src_time, src_app, src, src_lat, src_long, src_city, src_country, dest_time, dest_app, dest, dest_lat, dest_long, dest_city, dest_country, distance, speed
| stats count by dest, dest_app

And there is a lookup with two columns listing IP addresses and apps. How can I exclude from the rule the IP/application pairs that are in the lookup? Thank you for your time!
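One common pattern (a sketch; the lookup name and its column names are assumptions): a subsearch over the lookup expands into an OR of its rows, and NOT excludes every matching pair:

```
index=example source="Rule" NOT [| inputlookup excluded_pairs.csv | rename ip AS dest, app AS dest_app | fields dest, dest_app]
| stats count by dest, dest_app
```

The rename matters: the subsearch's field names must match the event field names (dest, dest_app) for the generated clauses to filter anything.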
I have a search that counts the number of times a user runs a program, and then returns the usernames of the users who run it more than threshold 'x_times'. The search requires information from two sourcetypes that share one 'event_id' field.

index=blah (sourcetype=example_1 OR sourcetype=example_2) earliest=-1d@d
| stats values(username), values(type), values(program_name), values(hostname) by event_id
| search type=filter_1 program_name=filter_2
| eventstats count as user_count by username
| search user_count >= 5
| table username

My goal is that, once I have the usernames I'm interested in, I want to run predictions on each username. I have tried this two ways:

Way 1

| map search="search index=blah (sourcetype=example_1 OR sourcetype=example_2) earliest=-90d@d | stats values(username), values(type), values(program_name), values(hostname) by event_id | search username=$username$ type=filter_1 program_name=filter_2 | timechart span=1d count as distinct_count | predict distinct_count algorithm=LLP5 | table output1, output2, prediction"

Way 2

| map search="search index=blah sourcetype=example_1 earliest=-90d@d | join event_id max=0 [ search index=blah sourcetype=example_2 earliest=-90d@d | table event_id, program_name] | search username=$username$ type=filter_1 program_name=filter_2 | timechart span=1d count as distinct_count | predict distinct_count algorithm=LLP5 | table output1, output2, prediction"

Experimentation has shown me that the mapped search can't seem to process past the stats or the join. Without the stats or the join, and then only being able to use one filter, I can get it to work; but with them it fails, and I'm unable to get an accurate prediction. Is there a solution, or do I need to approach this a different way?
Hi, can someone please guide me on how to schedule PDF delivery for a dashboard built in Splunk Dashboard Studio? Thanks in advance.
I recently set up Security Essentials for reporting on common ransomware extensions. I received my first alert, but it tagged a bunch of MP3s as TeslaCrypt3.0+. I have confirmed the files are not ransomware. Is there a reason for this? How can I limit the number of false positives?
Hey guys, I have two lookup tables, table1 and table2.

Table 1: ID, Username, Fname, Lname
Table 2: Username

What I want is for my search result to look like this:

ID, Username (from table 1), Fname, Lname, Username (from table 2)
54, User1, John, Smith, User1

The reason is that I want to compare the usernames from table 1 against table 2, so I can tell which users are missing from the source table 2 comes from. I was able to get append to work, but the usernames don't end up in the same row: it shows all the rows from table 1 with table 2's column blank, then all the rows from table 2 with table 1's columns blank. For example:

ID, Username (from table 1), Fname, Lname, Username (from table 2)
54, User1, John, Smith, (blank)
55, User2, Jane, Smith, (blank)
(blank), (blank), (blank), (blank), User1
(blank), (blank), (blank), (blank), User2

I just want the usernames from table 1 to match and sit in the same row as the username from table 2.
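One pattern that lines the rows up (a sketch; the file names and column names are assumptions): use the lookup command instead of append, so each table1 row is enriched in place with the matching table2 value:

```
| inputlookup table1.csv
| lookup table2.csv Username OUTPUT Username AS Username_t2
| table ID, Username, Fname, Lname, Username_t2
| eval status=if(isnull(Username_t2), "missing from table2", "present")
```

Rows where Username_t2 is null are the users absent from table 2.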
Hello Team, please join us on Saturday, September 25th at 11:00 AM for our next Mumbai Splunk User Group meet-up. I will be presenting on using Splunk Observability to collect metrics and metadata logs from AWS. https://usergroups.splunk.com/e/m8sas8/
There is no data on Mondays, so my timecharts always dip to 0.

{search string}
| eval date_wday=lower(strftime(_time,"%A"))
| where NOT (date_wday="monday")
| timechart span=1d count by ColName

Is there any way to make the timechart skip Mondays entirely (not just set them to 0)?
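One sketch of an approach: timechart creates the zero-filled buckets itself, so filter on the bucket timestamp after timechart runs rather than on the raw events before it:

```
{search string}
| timechart span=1d count by ColName
| where strftime(_time, "%A") != "Monday"
```

This removes the Monday rows from the result set entirely, instead of leaving zero-valued buckets.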
I have an alert that joins raw events with a lookup containing thresholds (and yes, it has to be a join). I would like to take one field from the alert details, last_file_time, and outputlookup only that field back to the root lookup table.

Question: is there a way to outputlookup only a single field from the table output?

Example:

| inputlookup MyFileThresholds.csv
| join type=left file_name [ search ....... eval last_file_time=strftime(_time, "%x %T") ]
| table monitor_status current_time current_day file_name file_cutoff_time host last_file_time
| outputlookup append=false MyFileThresholds.csv

I only want last_file_time going back to the root lookup table.
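For what it's worth, a sketch of the narrowing step (assuming file_name is the lookup's key and should travel with the value; note that outputlookup append=false writes only the fields present in the results, so writing to a separate, hypothetically named lookup avoids clobbering the other threshold columns in the root file):

```
...
| table file_name, last_file_time
| outputlookup append=false MyFileLastTimes.csv
```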
Hey guys. I have multiple events combined into transactions. I'd like to view the duration of each transaction on a timechart, to get an overview of when each transaction occurred and how long it took. My search so far is:

searchterms
| eval start_time = if(like(_raw, "%START%"), "start", null())
| eval end_time = if(like(_raw, "%END%"), "end", null())
| transaction JobDescription startswith=(LogMessage="* START *") endswith=(LogMessage="* END *") maxevents=5000
| timechart [pls help]

I'm pretty lost on this case, so help is very appreciated.
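Since the transaction command emits a duration field (seconds between a transaction's first and last event), one sketch of a finishing timechart (the span is an assumption to adjust to the data):

```
searchterms
| transaction JobDescription startswith=(LogMessage="* START *") endswith=(LogMessage="* END *") maxevents=5000
| timechart span=1h max(duration) AS max_duration avg(duration) AS avg_duration
```

Each bucket then shows how long the transactions that started in that hour ran.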