All Topics


Hi, from the query below I get the count of calls whose response time is more than 10000 milliseconds:

index="ab_cs" host="aw-lx0456.vint.ent" source="cb-ss-service" AND ((RequestedURL="/man/*/details" OR REQUESTED_URL="/man/*/contacts") OR (RequestedURL="/contacts/*/details" OR REQUESTED_URL="/contacts/*/members")) AND (ResponseStatus OR HttpStatusCode)
| sort -1 Timetaken
| eval TimeTaken3=trim(replace(Timetaken, ",", ""))
| where TimeTaken3 >= 10000
| stats count as ResponseOver10Sec

But I want to send an alert when the ResponseOver10Sec count is more than 2% of the total transactions. Could you please suggest an appropriate query?
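One possible way to compute the percentage in a single pass, sketched on the assumption that every transaction event carries a Timetaken value (adjust field names to your data):

index="ab_cs" host="aw-lx0456.vint.ent" source="cb-ss-service" AND ((RequestedURL="/man/*/details" OR REQUESTED_URL="/man/*/contacts") OR (RequestedURL="/contacts/*/details" OR REQUESTED_URL="/contacts/*/members")) AND (ResponseStatus OR HttpStatusCode)
| eval TimeTaken3=tonumber(trim(replace(Timetaken, ",", "")))
| stats count as TotalCalls, count(eval(TimeTaken3>=10000)) as ResponseOver10Sec
| eval PercentOver10Sec=round(ResponseOver10Sec/TotalCalls*100, 2)
| where PercentOver10Sec > 2

Saved as an alert with the trigger condition "number of results > 0", this would only fire when the slow calls exceed 2% of the total.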
REX command to create a field "domain" from a website URL. Example: input: https://www.youtube.com/sd/td/gs-intro, output: www.youtube.com
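A hedged sketch of one way to do that, assuming the full URL is already in a field called url (the field name is an assumption, adjust as needed):

... | rex field=url "https?://(?<domain>[^/]+)"
| table url domain

The [^/]+ capture stops at the first slash after the scheme, so https://www.youtube.com/sd/td/gs-intro would yield domain=www.youtube.com.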
Hello! We want to integrate McAfee ePO into Splunk Cloud, but we only found tutorials on syslogging the data. I've been looking and I don't think it's possible to send syslog directly into Splunk Cloud. How can we do it? Thanks!
I want to send an alert when more than 2% of total transaction calls take more than 10000 milliseconds. Could anyone please suggest an appropriate query?
Splunk's Visualization Trellis documentation page shows example searches for things like count by sourcetype, and later shows trellised visualizations for multi-value items, but there are no example searches for them. My data looks like this:

{
   audit: {
     audit_enabled: Compliant,
     control_access: NotCompliant,
     firewall_on: NotCompliant,
     etc: ...
   }
}

I can create separate searches for each item in audit {} like this:

source=device_audit | stats count by audit.audit_enabled

But there are many audit items. I'd like to trellis pie charts for each audit item without creating a separate search for each. Is there a search I can use with trellis to produce three pie charts showing the split between Compliant and NotCompliant for each of the audit items (audit_enabled/control_access/firewall_on)? Thank you.
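One possible approach, sketched on the assumption that the three compliance values are extracted as the fields audit.audit_enabled, audit.control_access and audit.firewall_on: flip the fields into a single item/status pair with untable, then let trellis split on the item:

source=device_audit
| table _time audit.audit_enabled audit.control_access audit.firewall_on
| untable _time item status
| stats count by item status

With a pie chart visualization and the trellis split-by set to item, this should render one Compliant/NotCompliant pie per audit item without a separate search for each.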
Hi there, I'm sitting here trying to make sense of the different search types in Splunk (i.e. dense, sparse, super-sparse, and rare), how they affect performance and why that is. I get that for a dense search, e.g. when you are matching practically every event in an index, there is no point in utilising bloom filters because there is no need to rule out buckets to find specific events. However, why isn't it beneficial for sparse and super-sparse searches to make use of bloom filters?
Hi, I am trying to pull data from a CSV through a deployment app, but only the field names are getting indexed; the data itself is not getting indexed. The number of records in the CSV is around 70000. I tried through DB Connect as well, but had the same issue there. Is there a limit on how much data can be indexed at a time? If yes, where can that be verified? Thanks
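For reference, a minimal sketch of what the deployed app might contain, assuming the CSV sits at a hypothetical path /opt/data/records.csv and you want header-based field extraction done on the forwarder (the path, sourcetype and index names are placeholders):

# inputs.conf in the deployed app
[monitor:///opt/data/records.csv]
sourcetype = my_csv
index = main

# props.conf in the same app on the forwarder (indexed extractions are applied at the forwarder)
[my_csv]
INDEXED_EXTRACTIONS = csv

On the limits side, there is no hard cap of 70000 rows for file monitoring; when only the header line appears, it is more often a props/sourcetype or checkpoint (fishbucket) issue than a size limit.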
We have logs coming in from one of the sources in CEF format. How do we handle CEF-format data parsing in Splunk so that it gets auto-converted into field-value pairs? After that I could alias those fields based on my data model needs. Kindly suggest. Thanks in advance.
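As a rough search-time sketch (not a packaged add-on), assuming the CEF extension portion is space-delimited key=value pairs after the pipe-delimited header, the extract command can pull those pairs out:

sourcetype=<your_cef_sourcetype>
| extract pairdelim=" " kvdelim="="

Note this naive split breaks on values that themselves contain spaces; for production parsing, a vendor TA or explicit props/transforms for that source is usually the cleaner route, with your field aliases layered on top for the data model.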
Hello, I am trying to track failed logons followed by a successful one using the transaction command and the following criteria: limit the time span to 5 minutes, add a startswith so each transaction begins with a logon failure, add an endswith so each transaction ends with a logon success, and add a | where to find when the eventcount exceeds 3. This is what I have so far.
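Separately from whatever you already have, a hedged sketch of one way those pieces could fit together, assuming Windows Security data where EventCode 4625 is a failed logon and 4624 a successful one (the index and field names are assumptions):

index=wineventlog (EventCode=4625 OR EventCode=4624)
| transaction user maxspan=5m startswith="EventCode=4625" endswith="EventCode=4624"
| where eventcount > 3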
Hi all, I have this need: compare a field with a series of error codes. I would not like to write the error codes into the search; I would like to use a lookup table instead. I entered the error codes in a column (named Errors) of the table, but when I perform the search they are not compared correctly. The column contains, for example: login.error.1004. The search:

tag=Log | lookup ServiziApp.csv ServiceName AS Service | search Functionality="Access" errorCode!=Errors

But the lines, despite having a field = login.error.1004, are still displayed. Checking the extracted fields, the errorCode field contains login.error.1004 and the Errors field also contains login.error.1004. Thanks in advance
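For what it's worth, a hedged sketch of the field-to-field variant (search errorCode!=Errors compares errorCode against the literal string "Errors", whereas where compares the two fields), assuming the lookup really returns a field named Errors for the matched Service:

tag=Log
| lookup ServiziApp.csv ServiceName AS Service OUTPUT Errors
| search Functionality="Access"
| where errorCode!=Errors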
I have a subsearch on an SMTP log to get all TO and FROM values together with the status. Unfortunately TO and FROM are in one log entry and TO and STATUS in a different one. The common field is the TextID. Simplified, the log structure looks like the following for a single TextID:

...
{"id":null,"log":{"text":"123A: to=<T@>, status=sent"}}
{"id":null,"log":{"text":"123A: to=<T@>, status=deferred"}}
{"id":null,"log":{"text":"123A: from=<F@> to=<T@> proto=ESMTP"}}
...

My current search:

index=A [ search index=A "to=<"
    | rex field=log.text "(?<TextID>\w+).*from=<(?<FROM>.*)> to=<(?<TO>.*)> "
    | dedup TextID
    | return 1000000 $TextID ]
| rex field=log.text "(?<TextID>\w+).*to=<(?<TO>.*)>.*, status=(?<STATUS>.*\))"
| table TextID TO STATUS

My current result:

TextID  TO   STATUS
123A    To1  sent
123A    To1  deferred
234B    To2  sent
234B    To2  delayed
345C    To3  sent

How can I also print out the FROM, which is only available in the subsearch, in the result set of the main search? I already tried to resolve this with union, join, append and appendcols but was unable to get the expected result. The expected result would be:

TextID  TO   STATUS    FROM
123A    To1  sent      From1
123A    To1  deferred  From1
234B    To2  sent      From2
234B    To2  delayed   From2
345C    To3  sent      From1

Thank you, Jörg
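A hedged sketch of one alternative that avoids the subsearch entirely: extract everything in a single pass and let eventstats copy the FROM value onto every row that shares the same TextID (the regexes follow the ones above and may need tightening for your real log text):

index=A ("to=<" OR "from=<")
| rex field=log.text "(?<TextID>\w+):"
| rex field=log.text "from=<(?<FROM>[^>]*)>"
| rex field=log.text "to=<(?<TO>[^>]*)>"
| rex field=log.text "status=(?<STATUS>.*)"
| eventstats values(FROM) as FROM by TextID
| where isnotnull(STATUS)
| table TextID TO STATUS FROM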
Due to a disaster, the Cluster Master of my indexer cluster is gone. There is no way to recover its data and we do not have a backup of its configuration files. The peers keep working fine so far without the CM, but we need to build a new instance from scratch and configure it. How can I do that?
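For orientation, a hedged sketch of the core stanza the rebuilt instance would need in server.conf before the peers can be pointed back at it (the factor values below are placeholders and should match what the old cluster used, and the pass4SymmKey must be the original cluster secret):

[clustering]
mode = master
replication_factor = 3
search_factor = 2
pass4SymmKey = <original cluster secret>

With that in place and the same management URI reachable by the peers, the peers re-register and the new manager rebuilds its view of the buckets from them.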
I have many different machines that move around the country (USA), each with its own GPS lat and long coordinates. I'd like to be able to show the last known location for each item on a map. My current search is as follows:

source=".../ops.log"
| table host, _time, gps_latitude, gps_longitude
| where gps_latitude > 0.0 AND gps_longitude < 0.0
| dedup host

(The first line looks into a .log file which has information for latitude and longitude; many times these values come in as 0.0 or null, so I need to filter those out before I add them to the map, hence the where command.) This correctly gets all of the machines with their GPS coordinates and displays them in a table (albeit very slowly). Now I would like to translate that information onto a marker map where each marker represents a host with its last known GPS coordinates. I recognize that I must probably use a cluster map for this and have tried to add the following line to generate the map (placed as the second-to-last line):

geostats latfield=gps_latitude, longfield=gps_longitude count by host |

Unfortunately, nothing happens when attempting to add this line. I would very much appreciate any help or guidance to set me in the right direction. Thank you for your time.
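A hedged sketch of how the geostats line might slot in, assuming the same field names (geostats takes no comma between latfield and longfield, and it replaces the table output rather than following a trailing pipe):

source=".../ops.log"
| where gps_latitude > 0.0 AND gps_longitude < 0.0
| dedup host
| geostats latfield=gps_latitude longfield=gps_longitude count by host

Since dedup keeps the most recent event per host, the resulting cluster map should show each machine at its last known coordinates.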
I forgot my username and password and now I am unable to log in. I can't even find the file path ($SPLUNK_HOME/etc/passwd), so please help me log in to the account.
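In case it helps, a hedged outline of the usual local reset, assuming you have filesystem access to the server and $SPLUNK_HOME is your install directory:

# rename the existing password file if it exists
mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak
# create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:
[user_info]
USERNAME = admin
PASSWORD = <new password>
# then restart Splunk so the seed file is applied
$SPLUNK_HOME/bin/splunk restart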
The agent was reporting successfully until the following error happened:

ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=0.10.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
Hello, I am a newbie and am looking to extract data from a sample set that looks like this (it's ingested in JSON):

{
   level: info
   log: uid="302650",  a_msg="HandlingStatus=Finished, Message=Changed,
   log_type: containerlogs
   stream: stdout
}

I want to extract the uid as well as the Message which is inside a_msg. I have rex field=log "uid=\"(?<uid>\d{1,}+)" which gives me the uid, but I am really struggling with the Message. Ideally I would like a table to be produced, so from the above data it would look like:

UID, Message
-------------------
302650, PlanChanged

I am reading up on rex and regular expressions, but this particular request requires a quick turnaround and I am really struggling. Any help would be appreciated. Many thanks
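A hedged sketch building on the rex you already have, assuming Message always appears inside a_msg as Message=<value> terminated by a comma or quote:

... | rex field=log "uid=\"(?<uid>\d+)\""
| rex field=log "Message=(?<Message>[^,\"]+)"
| table uid Message

Against the sample above this should yield uid=302650 and Message=Changed; if the value should really read PlanChanged, adjust the character class to match your actual log text.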
Hi, is it possible to integrate Splunk Enterprise on-premise with SignalFx, or does it only work with Splunk ITSI (APM)? Any ideas? Thanks.
Hey guys! This is my first question here, so I'm sorry if I'm not being clear. I want to enrich the data we have and add a few fields with data that I receive from an external API. For this, I want to create a custom command that receives a field name and runs Python code which sends requests to the API with the field values and creates new fields with the additional data for each row. I have no experience with creating new commands in Python, so I'd much appreciate an explanation of how to do it (or a better idea of how to implement this) and some examples to rely on. Thanks!
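Not a definitive implementation, but a minimal sketch of a streaming custom command using the Splunk SDK for Python (splunklib has to be bundled with your app; the command name, API URL and field prefix below are made up for illustration):

#!/usr/bin/env python
# enrich.py - hypothetical streaming command: ... | enrich fieldname=<field>
import sys
import requests
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option

@Configuration()
class EnrichCommand(StreamingCommand):
    # name of the field whose value is sent to the external API
    fieldname = Option(require=True)

    def stream(self, records):
        for record in records:
            value = record.get(self.fieldname)
            if value:
                # hypothetical endpoint; swap in your real API call and auth
                resp = requests.get("https://api.example.com/lookup",
                                    params={"q": value}, timeout=5)
                if resp.ok:
                    for key, val in resp.json().items():
                        record["enriched_" + key] = val
            yield record

dispatch(EnrichCommand, sys.argv, sys.stdin, sys.stdout, __name__)

The script goes in your app's bin directory and is registered in commands.conf (an [enrich] stanza with filename = enrich.py and chunked = true), after which it can be called as ... | enrich fieldname=user_id.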
I enabled report acceleration for a report. The acceleration summary builds well when the user's role has no search filter restrictions, but as soon as I add any search filter restriction to the role, the acceleration summary never starts to build. On the Report Acceleration Summaries page, the summary status shows that the progress is 0. Can anyone tell me why this happens? Any response will be appreciated.
Hi dear Splunk community, can someone help me convert/translate the following syslog-ng config to the corresponding rsyslog server-side config, please? The standard syslog-ng.conf file simply includes the statements below, which live in a file in the conf.d dir like so:

@include "/etc/syslog-ng/conf.d/*.conf"

I'd really appreciate it. It doesn't have to be perfect or exact or even completely converted, as long as most of it can be translated; the main concerns are the audit logs and all the rest of the program logs. Thanks so very much.

source s_remote { syslog(port(514), transport(tcp), flags(), max-connections(100), log-fetch-limit(100), log_iw_size(20000)); };

destination d_kern { file("/var/log/syslog-to-splunk/$HOST/kernel.log" create-dirs(yes)); };
destination d_mail { file("/var/log/syslog-to-splunk/$HOST/mail.log" create-dirs(yes)); };
destination d_daemon { file("/var/log/syslog-to-splunk/$HOST/daemon.log" create-dirs(yes)); };
destination d_auth { file("/var/log/syslog-to-splunk/$HOST/auth.log" create-dirs(yes)); };
destination d_cron { file("/var/log/syslog-to-splunk/$HOST/cron.log" create-dirs(yes)); };
destination d_security { file("/var/log/syslog-to-splunk/$HOST/audit.log" create-dirs(yes)); };
# All else.
destination d_rest { file("/var/log/syslog-to-splunk/$HOST/program/$PROGRAM.log" create-dirs(yes)); };

filter f_kern { facility(kern); };
filter f_mail { facility(mail); };
filter f_daemon { facility(daemon, user, syslog); };
filter f_auth { facility(auth, authpriv, security); };
filter f_cron { facility(cron); };
filter f_security { facility(kern, auth, authpriv, security, local7); };
filter f_rest { not facility(auth, authpriv, cron, kern, mail, user, security, syslog); };

log { source(s_remote); filter(f_kern); destination(d_kern); };
log { source(s_remote); filter(f_mail); destination(d_mail); };
log { source(s_remote); filter(f_daemon); destination(d_daemon); };
log { source(s_remote); filter(f_auth); destination(d_auth); };
log { source(s_remote); filter(f_cron); destination(d_cron); };
log { source(s_remote); filter(f_security); destination(d_security); };
log { source(s_remote); filter(f_rest); destination(d_rest); };
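Not a tested drop-in conversion, but a hedged sketch of roughly equivalent rsyslog (RainerScript) server-side config; some syslog-ng tuning knobs (log-fetch-limit, log_iw_size) and the non-standard "security" facility name have no exact counterpart here, so treat it only as a starting point:

# /etc/rsyslog.d/remote-to-splunk.conf (filename is just an example)
module(load="imtcp" MaxSessions="100")
input(type="imtcp" port="514" ruleset="remote")

# dynamic file name templates, one per destination
template(name="t_kern" type="string" string="/var/log/syslog-to-splunk/%hostname%/kernel.log")
template(name="t_mail" type="string" string="/var/log/syslog-to-splunk/%hostname%/mail.log")
template(name="t_daemon" type="string" string="/var/log/syslog-to-splunk/%hostname%/daemon.log")
template(name="t_auth" type="string" string="/var/log/syslog-to-splunk/%hostname%/auth.log")
template(name="t_cron" type="string" string="/var/log/syslog-to-splunk/%hostname%/cron.log")
template(name="t_security" type="string" string="/var/log/syslog-to-splunk/%hostname%/audit.log")
template(name="t_rest" type="string" string="/var/log/syslog-to-splunk/%hostname%/program/%programname%.log")

ruleset(name="remote") {
    if $syslogfacility-text == "kern" then action(type="omfile" dynaFile="t_kern" createDirs="on")
    if $syslogfacility-text == "mail" then action(type="omfile" dynaFile="t_mail" createDirs="on")
    if $syslogfacility-text == ["daemon","user","syslog"] then action(type="omfile" dynaFile="t_daemon" createDirs="on")
    if $syslogfacility-text == ["auth","authpriv"] then action(type="omfile" dynaFile="t_auth" createDirs="on")
    if $syslogfacility-text == "cron" then action(type="omfile" dynaFile="t_cron" createDirs="on")
    if $syslogfacility-text == ["kern","auth","authpriv","local7"] then action(type="omfile" dynaFile="t_security" createDirs="on")
    if not ($syslogfacility-text == ["auth","authpriv","cron","kern","mail","user","syslog"]) then action(type="omfile" dynaFile="t_rest" createDirs="on")
}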