All Topics

Dear Splunk community, I'm new to Splunk, so excuse my incompetence... What I'm trying to do is enrich my web access log with an app name and a team name from a CSV lookup file. The CSV file "ingress_map.csv" looks like this:

    ingress,app,team
    https://mycompany.com/abc,foo-bar,a-team
    https://app.mycompany.com,good-app,b-team
    https://app.mycompany.com/abc,better-app,c-team
    https://app.mycompany.com/abc/xyz,best-app,d-team

The url field of my web access log will seldom match one of the ingresses exactly. Is it possible to have a lookup that finds the best-matching ingress and adds the app and team fields to the log line? Or is there a better way of solving this problem?

Regards
Terje Gravvold
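A sketch of one possible approach, in case a wildcard lookup fits: lookup definitions support match_type = WILDCARD(<field>) in transforms.conf, which lets a CSV row such as https://app.mycompany.com/abc* match any URL under that path. The stanza name below is hypothetical, and this assumes each ingress value in the CSV is rewritten with a trailing *:

    # transforms.conf -- hypothetical lookup definition
    [ingress_map]
    filename = ingress_map.csv
    match_type = WILDCARD(ingress)
    max_matches = 1

    # SPL usage, assuming the access log's URL field is named "url"
    index=web ... | lookup ingress_map ingress AS url OUTPUT app team

With max_matches = 1, the first matching row in file order wins, so ordering the CSV rows from most specific to least specific approximates a best-match lookup.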
I have a stacked bar chart with time on the X-axis and success and failure counts stacked on the Y-axis. When I click on the success count, it needs to display a table with the success transaction details, and the same for the failure count. As of now I am passing the earliest and latest time from the bar chart with the condition below:

    <eval token="e">$click.value$</eval>
    <eval token="le">relative_time($click.value$, "+60m")</eval>

I have two panels, Show_Success and Show_Failure. Can someone help me set a token so that the right panel is shown depending on whether the click was on success or failure?
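A sketch of one way to do this in Simple XML, assuming the clicked series name arrives in $click.name2$ (the token that normally carries the series name for chart clicks); the token names here are placeholders:

    <drilldown>
      <condition match="$click.name2$ == &quot;Success&quot;">
        <eval token="e">$click.value$</eval>
        <eval token="le">relative_time($click.value$, "+60m")</eval>
        <set token="show_success">true</set>
        <unset token="show_failure"></unset>
      </condition>
      <condition match="$click.name2$ == &quot;Failure&quot;">
        <eval token="e">$click.value$</eval>
        <eval token="le">relative_time($click.value$, "+60m")</eval>
        <set token="show_failure">true</set>
        <unset token="show_success"></unset>
      </condition>
    </drilldown>

Each panel then declares depends="$show_success$" (or depends="$show_failure$") so only the matching panel renders after a click.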
How do I convert a Windows lastLogonTimestamp from the format 07:17.45 PM, Fri 09/30/2022 to 09/30/2022 19:17:45? Thank you
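A minimal sketch, assuming the value lives in a field named lastLogonTimestamp: parse it with strptime and reformat it with strftime.

    | eval lastLogonTimestamp=strftime(strptime(lastLogonTimestamp, "%I:%M.%S %p, %a %m/%d/%Y"), "%m/%d/%Y %H:%M:%S")

Here %I is the 12-hour hour, %p the AM/PM marker, and %a the abbreviated weekday; the output format %m/%d/%Y %H:%M:%S yields 09/30/2022 19:17:45.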
    index=aws sourcetype="aws:metadata" InstanceId=i-*
    | spath Tags{}.key.Name output=Hostname
    | mvexpand Hostname
    | fieldsummary
    | search field=Hostname

The above search gives me the count of the value instead of the value itself. What am I missing? Tags and AmiLaunchIndex are at the same level, right? Splunk extracts "Tags{}.Key"=Name and AmiLaunchIndex under INTERESTING FIELDS. I really want to learn spath. I know how to do this with regex. I read the documentation, but it doesn't make sense to me.

    | spath Tags{5}.Key output=HN

gives me values at the Key level, but not the Name values.
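A sketch of the usual pattern for AWS-style tag arrays, assuming each element of Tags{} is an object like {"Key":"Name","Value":"<hostname>"}: expand the array first, then pull Key and Value out of each element.

    index=aws sourcetype="aws:metadata" InstanceId=i-*
    | spath Tags{} output=tags
    | mvexpand tags
    | spath input=tags Key
    | spath input=tags Value
    | where Key="Name"
    | rename Value as Hostname
    | table InstanceId Hostname

The path Tags{}.key.Name does not exist in this structure ("Name" is a value of the Key field, not a key itself), and fieldsummary reports statistics about a field rather than its values, which is why the original search returns a count.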
I would like to detect a successful authentication after a brute-force attempt. It would be nice to see the multiple status code 400s and the 200s all from the same IP, so that I do not have to run separate searches for every IP. I used the query below but was unsuccessful. Please help if you can.

    index=[index name] sourcetype=[sourcetypename] httpmethod=* status_code=*
    | eventstats count(eval('action'=="success")) AS success, count(eval('action'=="failure")) AS failure BY src_ip
    | where total_success>=1 AND total_failure>=15
    | stats count by src_ip

In between I even added | strcat success . failure but could not get results. Kindly assist. Thank you.
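A sketch of a corrected version: the where clause has to reference the names the preceding command actually creates (success/failure, not total_success/total_failure), and stats is usually enough here. Field and index names are carried over from the question; if action is not already extracted, the first eval derives it from the HTTP status, under the assumption that 200 means success.

    index=[index name] sourcetype=[sourcetypename] httpmethod=*
    | eval action=if(status==200, "success", "failure")
    | stats count(eval('action'=="success")) AS success count(eval('action'=="failure")) AS failure BY src_ip
    | where success>=1 AND failure>=15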
Dear community, I am new to Splunk DB Connect and I am trying to understand a few things.

Context: I am trying to use Splunk DB Connect as an interface to my data stored in Hudi, HDFS, or Cassandra. I want Splunk DB Connect to query this data and return it to a Splunk environment. I have a few questions:

- I read that it is recommended to install Splunk DB Connect on a heavy forwarder. If we only have access to the search head, is it possible to install it on search heads?
- In terms of indexing, is it required to use Splunk's indexing, or can I rely on the indexing of the other database?
- Overall, my use cases will use Splunk DB Connect just as an interface.

Thanks a lot
I am trying to extract a field from the "textPayload" value, which is the log message and has "status" as a key. I want to extract "status" as a field so I can search on it and use it for creating alerts. Here is the regex I generated, which works in regex101:

    \\"status\\":\\"(?<status>[^\"]+)

Here is our sample log:
================================================================================
{"insertId":"l9ple6wfkvbdfasfdsfdwyoo","labels":{"compute.googleapis.com/resource_name":"gke-default-node-poo-4e912bb9-vrl1","k8s-pod/app":"some-service,"k8s-pod/environment":"dev","k8s-pod/part-of":"some-service","k8s-pod/pod-template-hash":"79cb686fcf","k8s-pod/security_istio_io/tlsMode":"istio","k8s-pod/service_istio_io/canonical-name":"some-service","k8s-pod/service_istio_io/canonical-revision":"v1","k8s-pod/stage":"dev","k8s-pod/version":"v1"},"logName":"projects/abc-dev/logs/stdout","receiveTimestamp":"2022-09-30T15:00:05.2690572Z","resource":{"labels":{"cluster_name":"-gke-dev","container_name":"some-service-v1","location":"us-east4","namespace_name":"dev","pod_name":"some-service-v1-79cb686fcf-x2frb","project_id":"gke-dev"},"type":"k8s_container"},"severity":"INFO","textPayload":"2022-09-30 15:00:00.952 INFO 1 --- [nio-8080-exec-8] c.a.a.a.controller.BrokerController : {\"classification\" "NORMAL\",\"action\" "ALERT\",\"host\" "asome-service-v1-79cb686fcf-x2frb\",\"ipAddr\" "10.143.104.169\",\"status\" "SUCCESS\",\"time\" "2022-09-30T15:00:00.952Z\",\"msg\" "getToken - Start\"}","timestamp":"2022-09-30T15:00:00.95264915Z"}
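A sketch of a deliberately loose rex that sidesteps the escaped-quote layers (the JSON inside textPayload is itself escaped, and backslashes that work in regex101 often need doubling again inside an SPL string). This assumes the status values are single words like SUCCESS:

    | rex field=textPayload "status\W+(?<status>\w+)"

Here \W+ swallows whatever run of backslashes, quotes, and colons separates the key from its value, so the same extraction works whether or not the escaping survives intact.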
Hello!

I'm relatively new to Splunk, but I've worked with databases over the years, so I felt like approaching this wasn't too bad.

The problem: in our situation, we have hosts that exist under our own index for an application. However, sometimes those hosts go down or stop reporting logs. That's a separate issue, but it's something we want to detect so we can give the user/client insight into which hosts are up and which ones are down.

So here's what I have so far:

    | union
        [ search index=unique_index host IN ($hosts$) source="<applicationPath>/http_logs/access_log.log"
        | dedup host
        | stats count by host
        | rename host AS hostsFound
        | fields hostsFound]
        [ makeresults
        | eval hosts=split("$hosts$", ",")]
    | eventstats values(hosts) as AllHosts
    | stats count(hostsFound) as Match dc(AllHosts) as MaxMatch values(hostsFound) as HostsFound values(AllHosts) as AllHosts
    | search Match < MaxMatch
    | mvexpand AllHosts
    | where !(AllHosts in (HostsFound))
    | rename AllHosts as HostsMissing
    | eval hosts=mvappend(HostsFound,HostsMissing)
    | fields hosts,HostsMissing
    | mvexpand hosts
    | eval count = if(hosts in (HostsMissing), 0, 1)
    | table hosts, count
    | dedup hosts

"$hosts$" is a token on the dashboard for this query, so when a list of hosts, or just one host, is selected, it populates there and the query runs.

This is a combination of what I've read on these forums and what I came up with. The first part of the union queries the tomcat access log for the hosts that do report back; the other side of the union is all of the hosts we pass in. In our example we have 7 hosts that report and one that does not, so 8 in total. In my experience this query works if ONE of the hosts doesn't report, as explained above; however, if all of the hosts report back, it returns no results.

So a few questions:
1. What can I do to make it return all results if all hosts return data AND if only a few or none of them return data?
2. Can this query be improved, and how?

I'm still learning how this system works, but any insight would be fantastic.

Thank you!
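A sketch of a simpler shape that returns every selected host whether all, some, or none report: seed a zero-count row per host, append the real counts, and take the max per host. It assumes $hosts$ is a comma-separated multiselect token, as in the question.

    | makeresults
    | eval host=split("$hosts$", ",")
    | mvexpand host
    | eval count=0
    | append
        [ search index=unique_index host IN ($hosts$) source="<applicationPath>/http_logs/access_log.log"
        | stats count by host ]
    | stats max(count) as count by host
    | eval count=if(count > 0, 1, 0)
    | table host, count

Because every host starts with a guaranteed 0 row, the result set is never empty, which removes the all-hosts-reporting failure mode of the union/eventstats version.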
    ERROR HttpListener [97417 TcpChannelThread] - Exception while processing request from x.x.x.x:63596 for /en-US/splunkd/__raw/services/search/shelper?output_mode=json&snippet=true&snippetEmbedJS=false&namespace=search&search=search%20i&useTypeahead=true&showCommandHelp=true&showCommandHistory=true&showFieldInfo=false&_=1664562934323: std::bad_alloc

Any help, please?
Hi,

I got an error message after upgrading Splunk Enterprise from version 8.1 to version 8.2.7. All my Splunk dashboards show a warning with the message:

    cannot expand lookup field 'hostname' due to a reference cycle in the lookup configuration

Can you tell me how to fix this issue?

Thank you
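For illustration only, a hypothetical props.conf that would trigger this kind of warning: an automatic lookup becomes cyclic when a field it outputs is also one of its inputs, so Splunk cannot decide an expansion order.

    # props.conf -- hypothetical; "hostname" is both input and output, creating a cycle
    [my:sourcetype]
    LOOKUP-host = my_host_lookup hostname OUTPUT hostname

    # one possible fix: rename the output so it no longer feeds its own input
    # LOOKUP-host = my_host_lookup hostname OUTPUT hostname AS resolved_hostname

Checking every automatic lookup (props.conf LOOKUP-* stanzas) that reads or writes hostname for this kind of feedback is one place to start; newer releases may simply be stricter about flagging a cycle that older versions tolerated.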
I can't use drilldowns in Splunk Mobile if the dashboard was created in Dashboard Studio. We tried creating the same dashboard with the same panel in both Dashboard Studio and Classic; in the Splunk app, only the one created as Classic has the drilldown function. In the documentation I cannot find any reference to the use of this function through these apps.
    [| makeresults
    | addinfo
    | eval earliest=relative_time(info_min_time,"@d+7h")
    | eval latest=relative_time(info_min_time,"@d+31h")
    | fields earliest latest]
    | fields file_name batch_count entry_addenda_count total_debit_amount total_credit_amount
    | dedup file_name
    | eval total_debit_amount=total_debit_amount/100, total_credit_amount=total_credit_amount/100
    | table _time file_name batch_count entry_addenda_count total_debit_amount total_credit_amount

I am using the above query, but I want to show two different time zones, PST and UTC, in the table. Right now the time shown is in UTC.
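A sketch of one way to render both zones, assuming the results currently display in UTC as stated: keep the UTC rendering of _time and derive a PST column with a fixed offset. The -8h offset is a simplification that ignores daylight saving (Pacific time is -7h during PDT).

    | eval time_utc=strftime(_time, "%Y-%m-%d %H:%M:%S UTC")
    | eval time_pst=strftime(_time - 8*3600, "%Y-%m-%d %H:%M:%S PST")
    | table time_utc time_pst file_name batch_count entry_addenda_count total_debit_amount total_credit_amount

Note that strftime renders in the searching user's configured timezone, so this arithmetic is only exact when that preference is UTC.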
Hi folks, I could use some help with this query.

    index=address_index earliest=-30m address
        [ search index=registration_index earliest=-30m
        | `get_ip_location(src_ip)`
        | rename user as email
        | dedup email
        | table email src_ip ip_location
        | return 15 $email]
    | rex field=_raw "REGEX xmlfield"
    | xmlkv xmlfield
    | eval email=lower(trim(EMAIL_ADDRESS))
    | eval city=lower(trim(CITY))
    | eval address=lower(trim(ADDRESS1))
    | eval state=lower(trim(STATE))
    | stats values(city) as city values(state) as state values(address) as address by email

The inner search looks for all the registrations from the past 30 minutes. The return command then passes the email to the outer search, which queries the address index for an address on file for that email. My goal right now is to pass two parameters to the outer search: the email and the src_ip/ip_location.

Problem: when I attempt to add a second parameter to the return command, in addition to email, the query no longer works. The ultimate goal is to build a search that queries registrations made online, uses get_ip_location on the originating IP address, and then compares that ip_location with the address on file (which is usually in the address index). However, when I try the following query, I get no results:

    index=address_index earliest=-30m address
        [ search index=registration_index earliest=-30m
        | `get_ip_location(src_ip)`
        | rename user as email
        | dedup email
        | table email src_ip ip_location
        | return 15 $email $ip_location]
    | rex field=_raw "REGEX xmlfield"
    | xmlkv xmlfield
    | eval email=lower(trim(EMAIL_ADDRESS))
    | eval city=lower(trim(CITY))
    | eval address=lower(trim(ADDRESS1))
    | eval state=lower(trim(STATE))
    | stats values(city) as city values(state) as state values(address) as address by email ip_location

How can I pass these two values, $email and $ip_location, to the outer search?
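A sketch of one workaround: return only the email (the only one of the two fields that actually exists in address_index, so it can filter the outer search), then re-attach ip_location afterwards with a join against the same subsearch. With return $email $ip_location, both values are emitted as bare ANDed search terms, and since no ip_location term matches anything in address_index, the search returns nothing.

    index=address_index earliest=-30m address
        [ search index=registration_index earliest=-30m
        | `get_ip_location(src_ip)`
        | rename user as email
        | dedup email
        | return 15 $email]
    | rex field=_raw "REGEX xmlfield"
    | xmlkv xmlfield
    | eval email=lower(trim(EMAIL_ADDRESS))
    | eval city=lower(trim(CITY))
    | eval address=lower(trim(ADDRESS1))
    | eval state=lower(trim(STATE))
    | join type=left email
        [ search index=registration_index earliest=-30m
        | `get_ip_location(src_ip)`
        | rename user as email
        | dedup email
        | table email ip_location]
    | stats values(city) as city values(state) as state values(address) as address by email ip_location

Running the registration subsearch twice is the cost of this shape; the join could equally be replaced with a lookup populated by a scheduled search.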
Hi all, we are facing an issue with the Splunk Add-on for Microsoft Cloud Services event hub input. We have created multiple inputs, and almost all of them are collecting only partial logs. We check the event count in the Azure Log Analytics workspace and, at the same time, check the events in Splunk; there is a random difference in event collection. There are no errors in the internal logs, although we can see some warning messages. We tried increasing the ingestion pipelines to 4, and tried disabling all the inputs except one to check whether that was causing the issue.

The Splunk deployment is a single-instance test environment with 32 vCPUs and 64 GB of memory assigned; storage is more than 800 IOPS. Not many applications are installed. A Splunk support case is also open, but so far they haven't been able to find a root cause. We need suggestions and input if someone else has faced such an issue.

A little background on the architecture: we have multiple data sources (Azure Activity & AD) sending logs to one event hub, and we segregate the sourcetypes in Splunk by transforming the data based on category and resourceId.

Please help to resolve this issue.

Thanks
Bhaskar
Hello,

I've been asked to create a report that shows the number of events from the two previous quarters by country, the monthly average, and the quarterly percent increase:

    Country  Q1'22 Total  Q1'22 Monthly Avg  Q2'22 Total  Q2'22 Monthly Avg  Q2'22 Percent Increase
    US       300000       100000             330000       110000             10%
    UK       60000        20000              61000        20333              2%
    Canada   1200         400                1500         500                25%

Using this:

    index=mydata earliest=-2q@q latest=-q@q
    | chart dc(ID) as count_earlier by Country
    | appendcols
        [ search index=mydata earliest=-q@q latest=@q
        | chart dc(ID) as count_later by Country]
    | eval ave_earlier=round(count_earlier/3,0)
    | eval ave_later=round(count_later/3,0)
    | eval DiffPer=round(((count_later - count_earlier) / count_earlier) * 100,0)."%"
    | table ReportersCountry,count_earlier,ave_earlier,count_later,ave_later,DiffPer

Now I'm trying to rename count_earlier, ave_earlier, count_later, and ave_later to the quarter labels. I've been using:

    | convert TIMEFORMAT="%m" ctime(_time) AS month
    | rex field=date_year "\d{2}(?<short_year>\d{2})"
    | eval quarter=case(month<=3,"Q1",month<=6,"Q2",month<=9,"Q3",month<=12,"Q4",1=1,"missing")."'".short_year

and have been trying to use eval {} to rename the columns, but haven't quite figured it out. I also tried using chart, which gets me the quarter headers, but then I couldn't figure out how to calculate the percent-difference column.

Thanks in advance for any help!
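A sketch of the eval {} trick, appended to the working search above: compute the two quarter labels from the search window itself, build the full column names, then use eval {field} to create dynamically named columns. This assumes relative_time quarter snapping (-2q@q / -q@q) matches the time range of the base search.

    | eval q1=strftime(relative_time(now(), "-2q@q"), "%m"), q2=strftime(relative_time(now(), "-q@q"), "%m")
    | eval q1="Q".ceiling(tonumber(q1)/3)."'".strftime(relative_time(now(), "-2q@q"), "%y")
    | eval q2="Q".ceiling(tonumber(q2)/3)."'".strftime(relative_time(now(), "-q@q"), "%y")
    | eval t1=q1." Total", a1=q1." Monthly Avg", t2=q2." Total", a2=q2." Monthly Avg", p2=q2." Percent Increase"
    | eval {t1}=count_earlier, {a1}=ave_earlier, {t2}=count_later, {a2}=ave_later, {p2}=DiffPer
    | fields - count_earlier ave_earlier count_later ave_later DiffPer q1 q2 t1 a1 t2 a2 p2

Because the labels come from now() rather than from _time, this avoids the convert/rex steps entirely; ceiling(month/3) maps months 1-3 to Q1, 4-6 to Q2, and so on.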
Hello,

We are in the process of upgrading our Splunk infrastructure from version 7.x to 8.x. Before the migration, we are stuck on functionality testing of Alert Manager version 3.08 on Splunk Enterprise 8.1.4 (dev environment). Alerts that are triggered with Alert Manager 3.08 are not being converted from the "new" status to "auto_assigned". We have checked the configurations and logs, but couldn't find anything that gives us clarity on this issue. Please help us with this issue, or provide an alternate way to find the root cause.

Note: I am attaching logs from the dev environment, where there is no log entry of "Set status of incident <id> to auto_assigned".
I'm trying to get a list of fields by sourcetype, without going down the route of fieldsummary, and thought analyzing the props configs would be a good place to start. I'm starting with EVAL-generated fields, but I'm not having any luck with the foreach section. Any pointers would be much appreciated.

    | rest splunk_server=local /servicesNS/-/-/configs/conf-props
    | table title EVAL-a*
    | eval eval_fields=""
    | foreach EVAL-*
        [ eval eval_fields=if(isnotnull(<<FIELD>>), mvappend(eval_fields,'<<MATCHSTR>>'), eval_fields) ]
    | table title eval_fields *
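A sketch of the fix, under the assumption the foreach itself is fine and the eval is what fails: field names containing hyphens (EVAL-foo) must be wrapped in single quotes when dereferenced in an eval, and <<MATCHSTR>> is a literal string, so it belongs in double quotes rather than single ones.

    | rest splunk_server=local /servicesNS/-/-/configs/conf-props
    | table title EVAL-*
    | eval eval_fields=""
    | foreach EVAL-*
        [ eval eval_fields=if(isnotnull('<<FIELD>>'), mvappend(eval_fields, "<<MATCHSTR>>"), eval_fields) ]
    | table title eval_fields

With '<<MATCHSTR>>' in single quotes, eval treats the matched suffix as a field name to dereference (usually null), so nothing ever gets appended.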
Hi team,

I want to chart the response time for each hour from our application logs and create a dashboard with a line graph. Please find a sample app log below:

{"TIMESTAMP":"2022-09-29 T11:31:49.038 GMT'Z","MESSAGE":"response=","LOGGER":"com.fedex.cds.ws.PerfInterceptor","THREAD":"http-nio-8080-exec-2089","LOG_LEVEL":"DEBUG","DataCenter":"1","EndUserId":"APP943415","Stanza":"etnmsMasterSubRangeStanza","ResponseTime":"268","Operation":"queryByIndex","Domain":"etnms","EAI":"APP943415","TransactionId":"ecd29878-e4f9-48db-ab29-a7fa98ba6be7","EAI_NAME":"cds","EAI_NBR":"APP943415"}
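A minimal sketch, with hypothetical index/sourcetype names: since the events are JSON, ResponseTime is likely auto-extracted (or reachable with spath), and timechart can average it per hour for a line chart.

    index=app_index sourcetype=app_json "PerfInterceptor"
    | spath ResponseTime
    | timechart span=1h avg(ResponseTime) as avg_response_time_ms

Swap avg for count, max, or perc95 depending on what the panel should show, then set the visualization type to a line chart.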
Hi there,

I am new to this kind of analysis within Splunk, but I've been asked to create a filter on events where the close date is before the start date. This is the search I created, but I can't get it working:

    index=main sourcetype="CRA_Consumer_Txt_data"
    | eval close_date=strftime(strptime(close_date,"%d%m%Y"),"%d/%m/%Y")
    | eval start_date=strftime(strptime(start_date,"%d%m%Y"),"%d/%m/%Y")
    | search close_date < start_date
    | table start_date, close_date

This is an example of what is shown when I run that search:

    start_date    close_date
    30/04/2021    23/05/2021
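A sketch of the likely fix: after strftime, close_date and start_date are strings, and | search compares them lexically, so "23/05/2021" counts as "less than" "30/04/2021". Comparing the epoch values with where, before formatting for display, avoids this. This keeps the question's assumption that the raw dates arrive as %d%m%Y.

    index=main sourcetype="CRA_Consumer_Txt_data"
    | eval close_epoch=strptime(close_date,"%d%m%Y"), start_epoch=strptime(start_date,"%d%m%Y")
    | where close_epoch < start_epoch
    | eval close_date=strftime(close_epoch,"%d/%m/%Y"), start_date=strftime(start_epoch,"%d/%m/%Y")
    | table start_date, close_date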