All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I want to display the total count of events and the count of failed events. In my case, failure is determined by the field DETAIL. The following query returns the count by sourcetype for "foo.bar.*" events containing error or exception:

index=myindex sourcetype=foo.bar.* DETAIL="*error*" OR DETAIL="*exception*"
| stats count(UNIQUE_ID) as Fail by sourcetype

Sourcetype      Fail
foo.bar.cat     3
foo.bar.dog     2

I tried the following query to get the total events per sourcetype together with the fails, but the fails always come back as zero:

index=myindex sourcetype=foo.bar*
| eventstats count(UNIQUE_ID) as Total by sourcetype
| search index=myindex sourcetype=foo.bar* DETAIL="*error*" OR DETAIL="*exception*"
| stats count(UNIQUE_ID) as Fail by src
| fields sourcetype Total Fail

Sourcetype      Total   Fail
foo.bar.cat     153     0
foo.bar.dog     128     0

I also tried using a subsearch for the query below, but this one doesn't work at all:

index=myindex sourcetype=foo.bar*
| eventstats count(UNIQUE_ID) as Total by sourcetype
| stats count(eval(DETAIL="*error*" OR DETAIL="*exception*")) as fail, values(Total) as Total by src
| fields src Total fail

I appreciate your time and support! Thanks!!
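One common pattern for getting both counts in a single pass is a conditional count inside stats (a sketch, untested against this data). Note that inside eval, the = operator does not expand wildcards the way the search command does, so like() is used for the substring match:

```spl
index=myindex sourcetype=foo.bar.*
| stats count(UNIQUE_ID) as Total,
        count(eval(like(DETAIL, "%error%") OR like(DETAIL, "%exception%"))) as Fail
        by sourcetype
```

This avoids the eventstats/search round trip entirely, since both aggregations run over the same event set.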
I need a query to find out and pinpoint which index queue got congested and slowed down the indexing process.
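As a starting point (a sketch based on the indexer's own metrics.log; current_size_kb, max_size_kb, and name are the fields Splunk emits for group=queue lines), charting queue fill ratios makes the congested queue stand out:

```spl
index=_internal source=*metrics.log* group=queue
| eval pct_full = round((current_size_kb / max_size_kb) * 100, 1)
| timechart span=5m max(pct_full) by name
```

A queue that sits near 100% while the queues downstream of it are empty is usually the bottleneck.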
In events that we extract CID and JID from, I would like an output of all JIDs that interacted with multiple CIDs. JID is a job ID; CID is a customer ID. I want to know where the same job interacted with more than one customer, and I would like to output that in a multivalue field. I achieve roughly what I want with this:

index="index-34" host="jobserver12-*" "Concurrent:" cmd="invite"
| eval _raw=log
| rex "cid:\s(?<cid>\d+)\s"
| rex "jid:\s(?<jid>\d+)\s"
| stats count values(cid) by jid

But I want to know how to do this directly. I tried mvcombine, but it looks like the fields have to have exactly the same values, and both JID and CID vary. Thanks!
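If the goal is one row per JID with its distinct customers collected into a multivalue field, a dc() plus values() sketch (reusing the extractions above) filters to jobs with more than one customer:

```spl
index="index-34" host="jobserver12-*" "Concurrent:" cmd="invite"
| eval _raw=log
| rex "cid:\s(?<cid>\d+)\s"
| rex "jid:\s(?<jid>\d+)\s"
| stats dc(cid) as cid_count, values(cid) as cids by jid
| where cid_count > 1
```

values(cid) already produces a deduplicated multivalue field, so no mvcombine is needed.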
Hi, when I run this query:

| tstats count, values("Authentication.tag") as tag from datamodel=Authentication where ((nodename = Authentication.Failed_Authentication) `hdsi_repeat_failed_logins_alert_filter`) groupby "Authentication.src", "Authentication.dest", "Authentication.user", _time span=1s
| `drop_dm_object_name("Authentication")`
| eventstats sum(count) as src_count by src
| eval user=lower(mvindex(split(user,"@"),0))
| search NOT [ search earliest=-24h@h tag=modify tag=password user=* NOT user="*$" | eval user=lower(mvindex(split(user,"@"),0)) | dedup user | fields user ]
| lookup hdsi_user_login_statistics.csv src, dest, user
| eval p_fail_user = exact(failcountbyuser / totalcountbyuser)
| eval p_fail_src = exact(failcountbysrc / totalcountbysrc)
| where (p_fail_user < 1 AND (p_fail_src > 0.05 OR p_fail_user > 0.1)) OR isnull(p_fail_user)
| eval safeness=case(tag=="privileged", 0.25, tag=="mail", 6, tag=="disabled_or_locked_out_authentication", 8, tag=="known_scanner_src", 20)
| fillnull value=1 safeness
| transaction maxspan=10m src,dest,user
| stats values(dest) as dest, values(user) as user, sum(count) as eventcount, min(_time) as _time, max(duration) as duration, sum(safeness) as safeness, dc(dest) as dest_count by src
| eval thresh = (safeness*30)/dest_count
| where eventcount > thresh

I get this error:

"type": "INFO", "text": "The limit has been reached for log messages in info.csv. 25 messages have not been written to info.csv. Refer to search.log for these messages or limits.conf to configure this limit."

Can someone help with this, please?
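That message is informational rather than fatal: it only means more than 25 log messages were generated for info.csv, with the overflow going to search.log. If raising the cap is desired, my assumption (verify against the limits.conf spec for your version) is that the relevant setting is max_infocsv_messages under the [search] stanza:

```ini
# limits.conf (assumed setting name; check the limits.conf spec for your Splunk version)
[search]
max_infocsv_messages = 100
```

The more useful step is usually to read search.log for the suppressed messages themselves, since they often point at the real issue in the search.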
Hi all. I'm trying to build a query and struggling to compare two fields. Essentially, this is what I am trying to do:

1) I have logs from our online email service with the usual details (time, source IP, email address, source logon country, etc.)
2) I have a lookup in Splunk with the common Active Directory details (name, title, country, etc.)

I am trying to get a search to show me the logons where the two country fields don't match, e.g., UserA logged on from Germany and his AD details show the user is based in Germany, therefore I don't want to see him. This is what I have so far:

index="email"
| lookup adusers Email AS Username OUTPUT DisplayName Title Country
| where "logon country" != "Country"
| table Username "Source IP" "logon country" DisplayName Title Country

The where statement doesn't work. Any ideas on how to get this working (if it's possible, of course)?
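The likely culprit is quoting: in the where command, double-quoted tokens are string literals, so "logon country" != "Country" compares two literal strings rather than the field values. Field names, including ones containing spaces, are referenced with single quotes in eval expressions. A sketch of the corrected comparison:

```spl
index="email"
| lookup adusers Email AS Username OUTPUT DisplayName Title Country
| where 'logon country' != Country
| table Username "Source IP" "logon country" DisplayName Title Country
```

Watch out for case and spelling differences between the two country fields (e.g. "Germany" vs "germany"); wrapping both sides in lower() makes the comparison case-insensitive.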
Hello everyone. Does anyone know the contact email for Splunk tech support? I need to know whether I can integrate the Cortex Cloud platform with Splunk Cloud. Thanks!
Splunk Machine Learning Toolkit: how can I write logic (from one CSV file) so that if column 1 > column 2, the row is shown as an outlier in MLTK?
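A simple rule like this doesn't strictly need MLTK; it can be expressed directly in SPL over the lookup file (a sketch; yourfile.csv, col1, and col2 are placeholders for the actual CSV name and columns):

```spl
| inputlookup yourfile.csv
| eval outlier = if(col1 > col2, "outlier", "normal")
```

The resulting outlier field can then feed a dashboard, an alert, or be passed along to an MLTK experiment as a label.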
I am taking a Splunk course and have some onboarding questions. I would like to pay for personal tutoring. Anyone available? Topics: app building, props and transforms. I have it down fairly well but have some specific questions my class can't address.
I'm trying to list all dates between the bounds of my time picker and have that as a column in my table. I can do both things individually, just not together:

index="myindex"
| rex "jobID (?<jobid>\d+)"
| rex "dayID (?<dayid>\d+)"
| eval daydt=strptime(dayid, "%Y%m%d")
| eval daydt=strftime(daydt, "%Y-%m-%d")
| transaction jobid dayid endswith="data consumed for jobID"
| eval duration=tostring(duration, "duration")
| eval status=if(closed_txn=="0", "Complete", "Incomplete")
| appendpipe [ | gentimes start=-1 | addinfo | eval date=strftime(mvrange(info_min_time, info_max_time, "1d"), "%F") | mvexpand date ]
| sort -date
| table date closed_txn daydt _time duration

Can someone tell me what's wrong here?
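To generate one row per day of the picker range on its own, a makeresults-plus-addinfo sketch works without gentimes (assuming a bounded time range; for an "All time" picker, info_max_time is not a finite epoch and mvrange has nothing to iterate over):

```spl
| makeresults
| addinfo
| eval date = mvrange(info_min_time, info_max_time, 86400)
| mvexpand date
| eval date = strftime(date, "%F")
| table date
```

These date rows can then be combined with the transaction results, for example via append followed by stats by date, so that days with no events still appear.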
I'm looking to create a custom alert for when a host that should only be accessing a certain filepath reaches out to a filepath it should not be accessing. For example, if a host that should only be accessing C:\Documents\Newsletters\Summer2018 then accesses a separate filepath of :\Projects\apilibrary, how can I create an alert for when the host accesses a filepath other than the one it should be accessing?
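One approach (a sketch; allowed_paths.csv, file_path, and your_file_audit_index are all placeholders for whatever exists in this environment) is to keep each host's permitted path prefix in a lookup and alert on any event whose path falls outside it:

```spl
index=your_file_audit_index
| lookup allowed_paths.csv host OUTPUT allowed_path
| where NOT like(file_path, allowed_path . "%")
```

Saved as an alert that triggers when results are greater than zero, this fires only for out-of-policy accesses, and new host/path pairs are onboarded by editing the lookup rather than the search.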
We have one of our dashboards in Grafana and the others in Splunk. Our requirement is to load the dynamic Grafana dashboard page into the Splunk dashboard. I tried the code below, but it says "url refused to connect":

<dashboard>
  <label>Services Telemetry</label>
  <description>Dashboard for testing purpose</description>
  <row>
    <panel>
      <title>Testing HTML</title>
      <html>
        <iframe src="https://stg-grafana*****></iframe>
      </html>
    </panel>
  </row>
</dashboard>

Can anyone please help me with it?
Hi all, first-time poster, so I'm not sure if this is the right location. I'm looking for a public feed that would update the "suspicious_writes_lookup". This lookup is used by "ESCU - Suspicious File Write - Rule" but only contains HIDDEN COBRA info by default.
Hello, we are planning to migrate a single-instance Splunk installation to a clustered deployment (1 master node, 1 search head, 2 indexers). We are using an app with accelerated data models.

As I understand it, we can manage all data models from the search head, and data models should be accelerated on the search head only.

Query 1: Can we deploy our full app, including data models, on the indexers as well? If not, which files should we avoid deploying to the indexers? When we tried deploying the full app on the indexers, all the accelerated data models showed up on the indexers as well, which I think is wrong.

Query 2: The same question, but for lookups. We use many lookups as well. Where should we keep all the lookups (search head or indexers)?

Thanks
Hi, I was trying to delete an alert created by a user, but Splunk gives an error saying:

Could not find object id=abc

Are you sure you want to delete ab+c?

Actually, the user created two alerts, one named abc and the other named ab+c (they just added this extra + symbol; everything else was the same). Now if we try to delete the ab+c alert, Splunk gives the error mentioned above. Could anyone tell me why this is happening and how I can delete the alert?
Hi everyone, I am working on an add-on that collects the event results for an alert and sends them to an API endpoint. On success, the endpoint returns a success message in JSON format, and I want to store it in a custom index and sourcetype. I tried the code below, but the data is written to the main index instead of my custom index. Is there a way to write the event into a custom index for an alert action built via Splunk Add-on Builder?

helper.addevent("hello", sourcetype="customsource")
helper.addevent("world", sourcetype="customsource")
helper.writeevents(index="mycustomindex", host="localhost", source="localhost")

Regards, Naresh
I'm trying to better understand the relationship between a defined lookup in Splunk (8.0.1) and its file permissions when running on Linux. We have an app containing the following:

- A file-lookup definition, call it foo
- A lookup CSV file, named foo.csv
- A scheduled saved search that modifies the contents of the foo lookup on some interval

From observation, if the foo.csv file is given explicit permissions, say chmod 644, those permissions are preserved when appending to the CSV file (| outputlookup append=true foo); however, the permissions are lost (reset to 600) when overwriting the CSV file (| outputlookup append=false foo). Is there a way to preserve a lookup CSV file's permissions on Linux when overwriting its contents through Splunk?
I have seen that there is a way to authenticate with a second factor through RSA and Duo, but I would like to know if there is another way, ideally one with no cost. I need to show a dashboard that can be accessed from anywhere, so that I don't have to set up a VPN and permissions for everyone who wants to access it.
Hi, I have a table like this:

name    percent
AAA     90
BBB     60 70

I want to group the BBB percents into one percent. What command can I use? Thanks
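If "grouping" means collapsing the BBB rows into a single number per name, stats can do it; which aggregation to use depends on the intent (sum is shown here as an assumption; swap in avg, max, or values as needed). Appended to the search that produces the table above:

```spl
| stats sum(percent) as percent by name
```

This yields one row per name, with BBB's percents combined into a single value.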
Hi, I am currently facing a role-related issue in ITSI. I installed ITSI 4.3.1 on Splunk Enterprise 8.0.3, and after a successful installation, when I open the ITSI app the error below pops up:

"Could not load page settings. Check that you have the proper roles and permissions. Details: Page not found!"

When I try to open other options like glass tables, deep dives, etc., it throws another error:

"Deep Dive could not be loaded. Possible cause: connection lost. Try restarting the Splunk platform. Status: 404 (Not Found) Details: Page not found!"

Below is my authorize list for role_admin, which looks OK, but I am not sure why the above errors occur. Could you please help with your expertise? I have attached a screenshot as well.

C:\Program Files\Splunk\bin>splunk btool authorize list role_admin
[role_admin]
accelerate_datamodel = enabled
admin_all_objects = enabled
apps_backup = enabled
apps_restore = enabled
change_authentication = enabled
cumulativeRTSrchJobsQuota = 400
cumulativeSrchJobsQuota = 200
dispatch_rest_to_indexers = disabled
edit_authentication_extensions = enabled
edit_bookmarks_mc = enabled
edit_cmd = enabled
edit_deployment_client = enabled
edit_deployment_server = enabled
edit_dist_peer = enabled
edit_encryption_key_provider = enabled
edit_forwarders = enabled
edit_health = enabled
edit_httpauths = enabled
edit_indexer_cluster = enabled
edit_indexerdiscovery = enabled
edit_input_defaults = enabled
edit_local_apps = enabled
edit_metric_schema = enabled
edit_metrics_rollup = enabled
edit_modinput_admon = enabled
edit_modinput_perfmon = enabled
edit_modinput_winhostmon = enabled
edit_modinput_winnetmon = enabled
edit_modinput_winprintmon = enabled
edit_monitor = enabled
edit_restmap = enabled
edit_roles = enabled
edit_scripted = enabled
edit_search_concurrency_all = enabled
edit_search_head_clustering = enabled
edit_search_schedule_priority = enabled
edit_search_scheduler = enabled
edit_search_server = enabled
edit_server = enabled
edit_server_crl = enabled
edit_splunktcp = enabled
edit_splunktcp_ssl = enabled
edit_splunktcp_token = enabled
edit_tcp = enabled
edit_tcp_stream = enabled
edit_telemetry_settings = enabled
edit_token_http = disabled
edit_tokens_all = enabled
edit_tokens_own = enabled
edit_tokens_settings = enabled
edit_udp = enabled
edit_upload_and_index = enabled
edit_user = enabled
edit_view_html = enabled
edit_web_settings = enabled
edit_win_eventlogs = enabled
edit_win_regmon = enabled
edit_win_wmiconf = enabled
edit_workload_pools = enabled
edit_workload_rules = enabled
get_diag = enabled
grantableRoles = admin
importRoles = itoa_admin;itoa_analyst;itoa_user;power;user
indexes_edit = enabled
install_apps = enabled
license_edit = enabled
license_tab = enabled
license_view_warnings = enabled
list_cascading_plans = enabled
list_deployment_client = enabled
list_deployment_server = enabled
list_dist_peer = enabled
list_forwarders = enabled
list_health = enabled
list_httpauths = enabled
list_indexer_cluster = enabled
list_indexerdiscovery = enabled
list_pdfserver = enabled
list_pipeline_sets = enabled
list_search_head_clustering = disabled
list_search_scheduler = enabled
list_settings = disabled
list_storage_passwords = disabled
list_tokens_all = enabled
list_win_localavailablelogs = enabled
list_workload_pools = enabled
list_workload_rules = enabled
never_expire = enabled
never_lockout = enabled
read_metric_ad = disabled
refresh_application_licenses = enabled
rest_apps_management = enabled
restart_reason = enabled
restart_splunkd = enabled
rtSrchJobsQuota = 100
run_collect = enabled
run_debug_commands = enabled
run_mcollect = enabled
run_msearch = enabled
schedule_rtsearch = enabled
select_workload_pools = enabled
srchDiskQuota = 25000
srchFilter = *
srchFilterSelecting = true
srchIndexesAllowed = *;_*;itsi_grouped_alerts;itsi_notable_archive;itsi_notable_audit;itsi_summary;itsi_tracked_alerts
srchIndexesDefault = main
srchJobsQuota = 50
srchMaxTime = 8640000
web_debug = enabled
write_metric_ad = disabled
write_pdfserver = enabled
I'm working on rolling out the forwarder to all of my company's clients, and I found the "Prepare your Windows network to run Splunk Enterprise as a network or domain user" guide. I've gone through the steps but skipped "Change Administrators group membership on each host"; we do not have many hosts, so I simply did it manually. But after I apply the GPO to the clients, the users are no longer able to log on; they simply get a black screen. I've confirmed that if I disable the GPO, they are able to log on just fine. I can't imagine any reason for this, so any help would be greatly appreciated. I've attached a screenshot of the GPO I created; all our users are running Windows 10 with the latest updates.