All Topics

Hi all! Help me with how I can use the table function in a query with a percentage. This is the query I used:

|table field-1, field-2, field-3 |stats count by field-1, field-2, field-3 | eval percentage=round(count/total*100,2)."%" |fields - total

No percentage column appears in the results. What query can I use to display it? These are the columns I want: host, signature, action, count, percentage. I need your help!

Other question: what makes a query take a long time to load, or more time to display results after pressing the search button? Thanking you in advance.
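One possible fix, as a hedged sketch: the total field never exists unless it is computed first, and stats should come before any table. Using eventstats to compute the grand total (the index name is a placeholder; host, signature, and action are the field names from the question):

```
index=my_index
| stats count by host, signature, action
| eventstats sum(count) as total
| eval percentage=round(count/total*100,2)."%"
| fields - total
```

On the second question: search time usually grows with the time range scanned and the volume of raw events retrieved before the first transforming command, so narrowing the time picker and the base search is the usual first step.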
Why is my Windows deployment server not sending a serverclass to a Linux deployment client? Any idea whether we need to make any additional configuration?
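For reference, a minimal deploymentclient.conf sketch on the Linux client (the server address below is a placeholder). Also check the serverclass on the deployment server itself: a whitelist that only matches the Windows hosts, or a machine-type filter such as machineTypesFilter = windows-x64, is a common reason Linux clients receive nothing.

```
# deploymentclient.conf on the Linux deployment client
# (deploy-server.example.com:8089 is a placeholder address)
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089
```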
-- index=_internal sourcetype=scheduler alert_actions=email status=success savedsearch_name="Okta_ResearchCenter_login_data_*"
The query above shows the Splunk alert action as a success.

-- index=_internal source=*python.log
2020-11-17 12:45:38,888 +0530 ERROR sendemail:475 - (552, '5.3.4 Message is too long.') while sending mail to: abc@xyz.com
2020-11-17 12:45:38,887 +0530 ERROR sendemail:142 - Sending email. subject="Splunk Report: Okta_ResearchCenter_login_data_SeptemberReport", results_link="https://abc-sh111.com/app/search/@go?sid=scheduler_dbmFnYXNyabkBiY2cuY29t__search__RMD54e9f54654ca73_at_1605300_47656_D3F84F-9527-4D98-AAC2-62D9B23BF8D2", recipients="[u'abc@xyz.com']", server="email-smtp.us-east-111.aws.com:587"

-- The settings action.action_history.maxresults and action.email.maxresults have been changed to 1000000.
-- The output of the query is 63,074 lines.
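The 552 '5.3.4 Message is too long' error comes from the SMTP server, not from Splunk: with maxresults raised to 1000000, the embedded results push the message past the mail server's size limit. A hedged sketch of savedsearches.conf settings that avoid that (the stanza name is taken from the report subject; the values are illustrative):

```
# savedsearches.conf (sketch)
[Okta_ResearchCenter_login_data_SeptemberReport]
# Option A: send only a link to the results, not the results themselves
action.email.sendresults = 0
action.email.inline = 0

# Option B: keep a CSV attachment, but cap it below the SMTP size limit
# action.email.sendcsv = 1
# action.email.maxresults = 10000
```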
Hi Team, I'm new to Splunk. How can I check the following activities for a list of users (e.g. 10 users) in a single query?
1) Password failures
2) Malware operations / malicious files
3) Traffic towards malicious public IPs
4) Suspicious mail activity
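One common shape for this, as a sketch only: restrict to the watched users with a lookup subsearch, label each event with an activity category, then count. Every name below (index, sourcetypes, field values, the lookup file) is an assumption for illustration and must be replaced with what your data actually contains:

```
index=security
    [| inputlookup watched_users.csv | fields user ]
| eval activity=case(
    sourcetype="wineventlog" AND EventCode=4625, "password failure",
    sourcetype="antivirus" AND action="blocked", "malware operation",
    sourcetype="firewall" AND threat_list="malicious", "traffic to malicious IP",
    sourcetype="mail" AND action="quarantined", "suspicious mail")
| where isnotnull(activity)
| stats count by user, activity
```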
I am trying to determine the successful UF deployments, beyond the incremental count on the forwarder manager. I am using an inputlookup (CSV) for an initial search, which returns the hosts from the lookup. I also want to list, in the same table, the hosts from the same inputlookup that have not returned a result. My search that works thus far:

index="_internal" source="*metrics.lo*" group=tcpin_connections fwdType=uf [ | inputlookup winunixtestinput.csv | fields hostname ] | dedup hostname | table hostname, sourceIp, fwdType, guid, version, build, os, arch

I do not seem to be able to add the second part of the search, listing the hosts that didn't report.
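One hedged way to show the non-reporting hosts in the same table: mark the hosts that did report, append the full lookup, and collapse by hostname so hosts with no forwarder data fall through with reported=no. This assumes the hostname values in the CSV match the hostname field extracted from metrics.log exactly (case and FQDN included):

```
index=_internal source="*metrics.lo*" group=tcpin_connections fwdType=uf
    [ | inputlookup winunixtestinput.csv | fields hostname ]
| dedup hostname
| eval reported="yes"
| append [| inputlookup winunixtestinput.csv | fields hostname ]
| stats first(reported) as reported, first(sourceIp) as sourceIp,
        first(version) as version, first(os) as os by hostname
| fillnull value="no" reported
```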
Hello, I am using the HTTP Event Collector for one of my sources. Currently the data is getting indexed every Monday, i.e. once a week. Where can I change this interval setting? Thanks
Hi, I have a weird requirement from one of my stakeholders. I have a sales application with many flows, starting at the landing page and ending at checkout. All these page hits and their response times are logged in my access logs. There is no session ID available in the logs because of a reverse proxy; only clientip is available. What I want to find out is:

- How much time a journey took for a customer
- If there was any time delay during the flow, exactly which page it occurred at

My sales application pages look like this:
- Landing page - /products
- Select installation type - /installation
- Direct-debit page - /direct-debit
- Credit check page - /credit-check
- Review basket - /review
- Successful checkout - /checkout/success

I have this base query where I was trying to attempt something, but I don't know where to start. Any guidance would be highly appreciated.

index=myapp_prod sourcetype=ssl_access_combined Method=GET requested_content="/products" OR requested_content="/installation" OR requested_content="/direct-debit" OR requested_content="/credit-check" OR requested_content="/review" OR requested_content="/checkout/success" | stats count(eval(requested_content LIKE "/products")) as "landing-page"
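With only clientip as a (weak) session key, one hedged starting point is the transaction command, which stitches a client's page hits into one event and exposes the total duration; maxpause guards against unrelated visits from the same IP being merged. A sketch over the base search from the question (the 30-minute pause is an assumption):

```
index=myapp_prod sourcetype=ssl_access_combined Method=GET
    (requested_content="/products" OR requested_content="/installation"
     OR requested_content="/direct-debit" OR requested_content="/credit-check"
     OR requested_content="/review" OR requested_content="/checkout/success")
| transaction clientip startswith=eval(requested_content="/products")
              endswith=eval(requested_content="/checkout/success") maxpause=30m
| table clientip, duration, eventcount
```

For the per-page delay, a streamstats pass such as | streamstats current=f last(_time) as prev_time by clientip followed by | eval gap=_time-prev_time shows the gap before each page. Note that shared IPs (NAT, corporate proxies) will blur individual journeys with either approach.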
Related to the recommendation in the following link: Setup load balancing. New versions of Splunk now fully support NLB, and Splunk Cloud is also behind an NLB.

<on-prem fwd> ===> NLB ===> Splunk Cloud

How do I set this up? https://www.linkedin.com/posts/harendra-rawat-b10b41_asynchronous-forwarding-with-nlb-activity-7112204069363933185-SYRv/
After installing the universal forwarder on a Windows forwarding host, Splunk Enterprise has been able to detect it, but it is not sending any logs and shows 0 deployed apps. Please help me get it to start forwarding logs. Thanks.
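"0 deployed apps" usually means the UF phones home but has no outputs or inputs configured yet; being detected does not by itself forward data. A minimal sketch of the two files on the UF (the indexer address and the chosen event log input are placeholders for your environment):

```
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997

# inputs.conf - at least one enabled input is required for data to flow
[WinEventLog://Security]
disabled = 0
```

The indexer must also be listening on the receiving port (Settings > Forwarding and receiving > Receive data, port 9997 by convention).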
I have tried parsing through the documentation for the Splunk Add-on for Windows, and I think I may be confusing topics. Is this add-on supposed to reach out and grab data, or is it only supposed to help handle the data once it is received? I see the documentation talks about using WMI instead of its normal method, but I am confused about what the normal method is. Do you have to use the universal forwarder with it? Any help is appreciated!
This was hard to describe in a title! The host field for some indexed events includes the full FQDN, while for others only the short hostname populates the host field. Examples:

host=server1.acme.com
host=server2
host=server3.local

I want to remove the domain portion, leaving only the hostname, like so:

host=server1
host=server2
host=server3

I am able to do this with the following:

rex field=host "(?<hostname>\w+)\."

However, this only returns data for host values that contain a period ("."); host values that contain only the short name are not returned. Using the sample data from above, the following search:

index="myindex" | rex field=host "(?<hostname>\w+)\." | table hostname

returns the following results:

server1
server3

How can I strip the domain suffix from the host field only for the values where that is applicable?
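One fix is to drop the required trailing dot and instead anchor the capture at the start of the value, matching everything up to the first dot (or the whole value if there is none):

```
index="myindex"
| rex field=host "^(?<hostname>[^.]+)"
| table hostname
```

Against the sample data this yields server1, server2, and server3. An equivalent eval form is | eval hostname=mvindex(split(host,"."),0).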
Hello all! I am fairly new to Splunk, but I want to make a chart that uses the X axis for a specified span of time (span=3y), the Y axis for a list of users, and the plotted data points for the due dates unique to each user. I've been experimenting with the timechart command, but I can't seem to figure out how to change the Y axis (if that is even possible). Is there a better way to approach this? Any help would be greatly appreciated!
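For context, timechart always puts _time on the X axis and turns each by value into a series, so the closest built-in fit is one series per user rather than users as Y-axis labels. A hedged sketch (the index, the due_date and user field names, and the date format are all assumptions):

```
index=my_tasks
| eval _time=strptime(due_date, "%Y-%m-%d")
| timechart span=1mon count by user
```

Setting _time to the due date makes the chart plot when things are due rather than when the events were indexed. If you truly need users on the Y axis with dots per due date, a scatter/bubble visualization over | stats count by user, due_date may be a better fit than timechart.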
Hi All, in my environment we have 6 indexers and one search head, all running Server 2012. We are running out of space on the physical indexers, which are limited by physical drives. I have a SAN connection to all of the indexers. How would I move just the cold DBs to a separate drive, leaving warm on the original drives? E.g. I have:

[Linux]
homePath = volume:hot\Linux\db
coldPath = volume:cold\Linux\colddb
thawedPath = $SPLUNK_DB\Linux\thaweddb
tstatsHomePath = volume:tstatsHomePath\Linux\datamodel_summary
maxTotalDataSizeMB = 7500000
frozenTimePeriodInSecs = 31536000

I would like coldPath = F:\Linux\colddb, with the current cold DBs moved over to the new drive and still searchable. How would I accomplish this?
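A hedged outline of the move (the source path below is hypothetical; test on one index first): cold buckets can be relocated while Splunk is stopped, and they remain searchable from the new coldPath after restart.

```
# 1. Stop Splunk on the indexer.
# 2. Copy the existing cold buckets to the new drive, e.g.:
#    robocopy D:\cold\Linux\colddb F:\Linux\colddb /E
# 3. Point coldPath at the new location in indexes.conf:
[Linux]
homePath = volume:hot\Linux\db
coldPath = F:\Linux\colddb
thawedPath = $SPLUNK_DB\Linux\thaweddb
# 4. Restart Splunk, verify the cold buckets are searchable,
#    then remove the old copies to reclaim space.
```

With six indexers this is one maintenance window per indexer; a rolling approach keeps search available throughout.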
New to Splunk, coming from a SolarWinds monitoring environment. Does anyone have suggestions on how best to transition to Splunk, or any recommended intro books, videos, etc. to get me using the search feature to monitor effectively? Thank you all in advance; any help is greatly appreciated.
This is my query sample:

index=X service_name=XY request_host=XYZ | rex field=_raw "FId=(?<fi>\d+)" | rex field=request_route "^(?<route>.*)\?" | rex field=_id "^(?<route>.*)\?" | eval eTime = total_time | lookup FI_Name-ICA.csv ICA AS fi OUTPUT FI as fi | stats count(total_time) as TotalCalls, max(eTime) AS MaxTime, avg(eTime) as AvgTime, min(eTime) as MinTime, p90(total_time) as P90Time, p95(total_time) as P95Time by fi route | sort route, -count | table fi, route, TotalCalls, MaxTime, MinTime, P90Time, P95Time, AvgTime | sort fi

I am trying to add columns for calls that took 0 to 3 seconds, 3 to 5 seconds, and more than 8 seconds.
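The duration buckets can be added as conditional counts inside the existing stats, using the count(eval(...)) idiom. A sketch (eTime is assumed to be in seconds; the 5-8 second band is left out to mirror the question):

```
... | stats count(total_time) as TotalCalls,
        count(eval(eTime>=0 AND eTime<=3)) as "0-3s",
        count(eval(eTime>3 AND eTime<=5)) as "3-5s",
        count(eval(eTime>8)) as ">8s",
        max(eTime) as MaxTime, avg(eTime) as AvgTime, min(eTime) as MinTime,
        p90(total_time) as P90Time, p95(total_time) as P95Time
        by fi route
```

Then add the new column names to the existing table command so they appear in the output.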
I'm fairly new to Splunk, so please bear with me. I have a log file with multiple lines of data, but when I search I get mixed results. Here is an example log file:

Crashed Jobs for Thu Dec 10 12:05:01 EST 2020 in qa environment
Job started @ 20201210120501
CustomerHistoryLoad_fixLoad_FileFix_PART
call_SPBatchDetail_Web.Job_BatchDetailStartWebDeptRequirements
EmployeeMasterPull
get_ControlState_StoreCloseMonitor.Job_GetControlState_StrClsMon
RunSeqBusinessEODLoad
run_CustomerLoadSeq
run_SalesLease_LoadSeq
run_Vendor_CDP_DW_LoadExportSeq
run_Vendor_POSLog_ExportSeq_Adhoc_Run
run_WebApr_LoadSeq
run_WebDeptRequirements_LoadSeq
Seq_HRMS_AD_to_DW
StoreCloseMonitorSeq
Job ended @ 20201210121407

Here is my search:

index=bli_datastage_crash_jobs_qa sourcetype=bli_datastage_crash_jobs | rex field=_raw "From:(?<Crashed>.*) To:(?<Job>.*)"

The problem is that I get multiple events instead of just one. I suspect there are line breaks (newlines) in this log file, but I can't seem to get all the lines included in a single event; the data appears to be indexed as separate events. Any advice on getting the data indexed as a single event would be greatly appreciated.
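Event breaking is controlled by props.conf on the indexer (or heavy forwarder) for this sourcetype, not by the search. A hedged sketch that starts a new event only at the report header line; it affects newly indexed data only, not events already in the index:

```
# props.conf (sourcetype name taken from the question)
[bli_datastage_crash_jobs]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^Crashed\sJobs\sfor
MAX_EVENTS = 1000
```

MAX_EVENTS raises the merged-line ceiling in case a report lists many jobs; the default is 256 lines.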
I know how to use eval and if statements to pull fields that contain a %.value.% pattern, but how can I use this when running a search with | lookup, outputting fields that contain the value of a field within the search? Let me know if you need an example search or more context. Thanks to anyone who can help me with this.
Hello, I am trying to dedup events from successful authentications in Splunk. Currently, our Windows systems generate about 4 events per authentication, but we only want to see one. I would like to dedup based on time, within 0.5 seconds for each event. Here is my current search:

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Authentication.Authentication where Authentication.user=* (Authentication.src=* OR Authentication.dest=*) Authentication.action=failure by Authentication.user, Authentication.src, Authentication.dest | rename "Authentication.*" as "*" | eval source&destination=mvappend(src,dest) | eventstats dc(source&destination) AS host_count by user | where host_count >= 1 | sort - host_count | table source&destination, user | head 250

How can I add a dedup by time here? Thanks!
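One hedged approach: carry _time through tstats with a small span, then dedup on the time bucket plus the identity fields. tstats supports by _time span=...; a 1-second bucket is used below as an approximation of the 0.5-second window (a trimmed sketch of the base search, keeping the question's action=failure filter):

```
| tstats summariesonly=true allow_old_summaries=true count
    from datamodel=Authentication.Authentication
    where Authentication.user=* Authentication.action=failure
    by _time span=1s Authentication.user Authentication.src Authentication.dest
| rename "Authentication.*" as "*"
| dedup _time, user, src, dest
```

The rest of the original pipeline (mvappend, eventstats, sort, head) can follow the dedup unchanged. The caveat is that two duplicates straddling a bucket boundary survive; a smaller span reduces, but cannot fully eliminate, that edge case.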
Hey guys, a quick question that I have been unable to answer by Googling or reading the docs. If I want to tag a server using etc/system/local/inputs.conf, am I able to apply a _meta field to individual sources, as opposed to only sourcetypes?

For example, we need to give a specific tag to each log file location on a server. One location will have _meta = altci::examplea and the other will have _meta = altci::exampleb. It would look like this:

[default]
host = $decideOnStartup

[monitor://F:\logFiles\inetpub\*.log]
_meta = altci::examplea

[monitor://C:\logFiles\inetpub\*.log]
_meta = altci::exampleb

The reason is that a few of our servers are legacy and host numerous sites/apps. I know it sucks, but they are a few old servers that are pretty important! Thanks!!