All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have log files like this:

    -rw-r----- 1 jira jira 4921534 Apr 13 22:42 catalina.2020-04-13.log
    -rw-r----- 1 jira jira 463769261 Apr 14 00:00 access_log.2020-04-13
    -rw-r----- 1 jira jira 2840014 Apr 14 13:08 catalina.2020-04-14.log
    -rw-r----- 1 jira jira 222675515 Apr 14 13:08 access_log.2020-04-14

How do I configure inputs.conf for the access_log files? I tried the following, but it did not work:

    [monitor:////apps/logs/access_log]
    index = prdidx
    blacklist = .(gz)$
    sourcetype = ACCESS
    _TCP_ROUTING = WEB
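Two things stand out in that attempt (a sketch of a fix, to be verified against the environment): a monitor stanza takes three slashes before an absolute path, not four, and the path needs a wildcard to match the dated file names such as access_log.2020-04-13. Something like:

```ini
# inputs.conf -- sketch; path and routing group copied from the question
[monitor:///apps/logs/access_log*]
index = prdidx
blacklist = \.gz$
sourcetype = ACCESS
_TCP_ROUTING = WEB
```

The blacklist is also tightened to `\.gz$` so the dot is a literal rather than matching any character.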
I have a lookup that recently stopped automatically extracting fields. What I've noticed is that I can join against it if, in the subsearch, I specifically search for that row, but the normal lookup command gives me nothing. For example:

    index=a sourcetype=a host=host1 | lookup host_lookup host as host output fieldA

does not give me the fieldA value for host1. However, if I do:

    index=a sourcetype=a host=host1 | join host [| inputlookup host_lookup | table host fieldA | search host=host1]

I get fieldA just fine. So it would appear that some sort of limit is being hit, even though I don't see any indication in the UI or the job inspector that I am hitting a limit. Does anyone know if this is indeed a limit I'm hitting? Or is there anything else I can look into?
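One thing worth checking (a sketch, not a confirmed diagnosis): lookup tables are subject to size limits in limits.conf, and a CSV that has grown past max_memtable_bytes is handled differently (Splunk indexes it on disk instead of holding it in memory), which can surface as lookups quietly returning nothing. Comparing the file size against the limit, and raising it, is cheap to try:

```ini
# limits.conf -- sketch; the default max_memtable_bytes is on the order of 10 MB
[lookup]
max_memtable_bytes = 104857600
```

A quick `| inputlookup host_lookup | stats count` alongside a look at the file size on disk will tell you whether the lookup has recently crossed the threshold.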
I have a line-count chart that shows only the number of cases for each ID number, and I want to connect it to another panel that shows more details for the specific ID number. Here is an example of the search for my line chart:

    index=ccp sourcetype=ves_jep | stats dc(CODE) by ID

and an example of the more detailed search:

    index=ccp sourcetype=ves_jep | stats dc(JEOPARDY_CODE) by CCRID, REQUIREDSERVICEDATE, DATEAPPROVED, GLOBALTRACKINGSTATUS, JEOPARDY_CAUSE

I would like the user to click on one of the IDs in the line chart and be led to the second panel with the details. Thank you in advance!
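In Simple XML this is usually done with a drilldown that sets a token, plus a depends attribute on the detail panel so it only appears after a click. A sketch (the token name, and the assumption that the chart's ID corresponds to CCRID in the detail search, are mine, not from the original dashboard):

```xml
<panel>
  <chart>
    <search>
      <query>index=ccp sourcetype=ves_jep | stats dc(CODE) by ID</query>
    </search>
    <drilldown>
      <!-- $click.value$ is the x-axis value, i.e. the clicked ID -->
      <set token="selected_id">$click.value$</set>
    </drilldown>
  </chart>
</panel>
<panel depends="$selected_id$">
  <table>
    <search>
      <query>index=ccp sourcetype=ves_jep CCRID=$selected_id$
| stats dc(JEOPARDY_CODE) by CCRID, REQUIREDSERVICEDATE, DATEAPPROVED, GLOBALTRACKINGSTATUS, JEOPARDY_CAUSE</query>
    </search>
  </table>
</panel>
```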
We've started using Splunk Connect for Kubernetes. We have a heavy forwarder set up that hosts the HEC endpoint and forwards the logs on to the indexers. We had an issue where the indexers went down, though there was no indication that the HEC endpoint was down. Logs from these K8s clusters are missing for that period. Other systems that use the normal universal forwarder backfilled their logs after the downtime, but the logs from those clusters did not. I'm not familiar with K8s and wasn't involved in deploying the Splunk Connect pieces. Am I right in assuming that with HEC, if the logs don't make it into Splunk, they aren't retried or cached anywhere? Or is that some sort of setting we may have missed? Thanks for looking, Jeremiah
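For what it's worth (a sketch of the relevant knobs, to be verified against your deployment): HEC itself does not cache or replay on the server side, but the fluentd pods in Splunk Connect for Kubernetes do buffer and retry failed sends; the catch is that they only see a failure if the endpoint reports one. If the heavy forwarder keeps returning success while its own queues to the downed indexers back up and overflow, the client discards its buffer. HEC indexer acknowledgment is the mechanism meant to close that gap, enabled per token on the forwarder (the token stanza name here is an assumption):

```ini
# inputs.conf on the heavy forwarder -- sketch
[http://k8s_hec_token]
useACK = true
```

The sender then has to poll the ack endpoint before discarding events, so check whether your Splunk Connect for Kubernetes configuration has its corresponding indexer-acknowledgment option enabled as well.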
Has anyone used Splunk Enterprise to effectively detect Pass-the-Ticket attacks? If so, I would be curious how you did it. Thanks!
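One commonly cited starting point, offered only as a sketch and not a tested detection (field names assume the Windows TA's extractions for Security log events): look for Kerberos service-ticket activity (EventCode 4769) from a client that produced no corresponding TGT request (EventCode 4768), since a replayed ticket is used without one ever being requested from that host:

```spl
index=wineventlog (EventCode=4768 OR EventCode=4769)
| stats values(EventCode) as codes by Account_Name, Client_Address
| where isnotnull(mvfind(codes, "4769")) AND isnull(mvfind(codes, "4768"))
```

Expect noise (e.g. long-lived TGTs requested before the search window); treat this as a hunting query, not an alert.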
Hi, I am trying to send events in a specific index, regardless of sourcetype, to the Diode Receiver add-on, but I cannot really get it to work. Setting up the add-on using the [default] stanza in props.conf matches events in ALL indexes, including _internal, and that really makes a mess of the main index on the receiver. I tried the CEF add-on, but I'm not sure how to configure the routing for cefout. Can it even be configured to send UDP? This is all done in a test environment without a hardware diode, for easy troubleshooting, but the goal is to set up two Splunk servers separated by a UDP-only diode and have the main index on both servers contain the same information.
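A [default] props stanza is unavoidable when the routing decision is per-index rather than per-sourcetype, but the transform can still be made selective by keying on the index metadata, so only the one index is rerouted. A sketch (stanza names, index name, and receiver address are assumptions), using a syslog output since that is the output type that supports UDP:

```ini
# props.conf -- sketch
[default]
TRANSFORMS-route_diode = route_diode_index

# transforms.conf
[route_diode_index]
SOURCE_KEY = _MetaData:Index
REGEX = ^main$
DEST_KEY = _SYSLOG_ROUTING
FORMAT = diode_udp

# outputs.conf
[syslog:diode_udp]
server = 10.0.0.1:514
type = udp
```

Note that syslog output re-serializes events, so verify that the Diode Receiver add-on's expected wire format matches what this produces.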
I want to learn the Search Processing Language (SPL). Are there any video tutorials for it?
Hi all,

I was wondering if any kind soul could point me in the right direction with this. I recently put together a jazzy-looking dashboard using the Status Indicator visualization; unfortunately, it has no built-in drilldown capability. The hack others on this forum use is to open a pop-up when the panel is clicked, using JS. I've got that bit working, but for the life of me I can't figure out how to get the filter values/tokens from the current dashboard to pass to the pop-up URL. I've put together the following script from examples on the forum and the Splunk dev docs:

    // Components to require
    var components = [
        "splunkjs/ready!",
        "splunkjs/mvc/simplexml/ready!",
        "jquery"
    ];

    // Require the components
    require(components, function(mvc, ignored, $) {
        $('#something').click(function() {
            // Get the default model
            var defaultTokenModel = splunkjs.mvc.Components.getInstance("default");
            // Get some token from there
            var time_token = defaultTokenModel.get("time_field");

            // Other method from the docs
            //var tokens = mvc.Components.get("default");
            //var time_token = tokens.get("time_field");

            window.open(
                'drilldown_report?earliest=' + time_token.earliest,
                '_blank' // <- This is what makes it open in a new window.
            );
        });
    });

I tried two methods of getting the time token, Components.getInstance() and Components.get(), and also tried both token names, "time_field" and "form.time_field". The actual parent-dashboard URL is:

    https://splunk.myhome.uk:8000/en-GB/app/search/my-core/edit?form.time_field.earliest=-24h%40h&form.time_field.latest=now

but all I ever get back is:

    Uncaught TypeError: Cannot read property 'earliest' of undefined
        at HTMLDivElement.eval (eval at globalEval (common.js:1003), <anonymous>:30:61)
        at HTMLDivElement.dispatch (common.js:1014)
        at HTMLDivElement.elemData.handle (common.js:1014)
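A likely cause (a sketch based on how Simple XML stores time tokens; verify against your dashboard): a time input with token time_field does not store an object under "time_field" — it stores two flat tokens, "time_field.earliest" and "time_field.latest", which is why time_token comes back undefined. Reading the flat keys instead:

```javascript
// sketch -- runs inside Splunk Web only; token names assume the time input is called "time_field"
require(["splunkjs/mvc", "splunkjs/mvc/simplexml/ready!", "jquery"], function(mvc, ignored, $) {
    $('#something').click(function() {
        var tokens = mvc.Components.get("default");
        var earliest = tokens.get("time_field.earliest"); // flat key, not a property of an object
        var latest = tokens.get("time_field.latest");
        window.open(
            'drilldown_report?form.time_field.earliest=' + encodeURIComponent(earliest) +
            '&form.time_field.latest=' + encodeURIComponent(latest),
            '_blank'
        );
    });
});
```

The error's "Cannot read property 'earliest' of undefined" is consistent with this: get("time_field") returned undefined, and the .earliest access then threw.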
I need a query for all active and inactive users in Splunk ES, without using the "reset" key.
Hopefully I can explain this clearly. I'm trying to create a "what-if" dashboard. I'm trying to model moving a workload from one device to another. So the user could select a source device (dropdown), a destination device (second dropdown) and the workloads they want to move off the source device (multiselect dropdown). So just using IOPs as the metric, I'd like to display a graph that shows one line for current IOPs on the destination device along with a line that adds the IOPS from what's selected in the multiselect dropdown. This has proven to be quite difficult. I've tried writing a single query to gather all of this to no avail. There has to be a way to do this but I sure can't think of one. Any ideas?
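One way to sketch this with the tokens from the three inputs (all field names here — host, workload, iops — are assumptions about the underlying data, not from the original): pull the destination's own load and the selected workloads' load on the source in one search, then build the what-if line by adding them per time bucket:

```spl
index=perf (host="$dest$" OR (host="$src$" workload IN ($workloads$)))
| eval series=if(host=="$dest$", "dest_iops", "moved_iops")
| timechart span=5m sum(iops) by series
| fillnull value=0 dest_iops moved_iops
| eval projected_iops=dest_iops+moved_iops
| fields _time dest_iops projected_iops
```

This assumes the multiselect is configured with a value prefix/suffix of double quotes and a comma delimiter so $workloads$ expands into a valid IN() list.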
Hello everyone. The following query gives me what I need for the PANs, with one pillar per PAN. However, I need to change it so the chart covers the last 8 days, with a group of pillars per day, one for each of the four PANs:

    index=pa* sourcetype=pan:threat (action=dropped OR action=blocked) src_ip!=10.* threat_id=*
    | stats count by dvc_name
    | sort count desc

Any assistance you can provide in that regard will be greatly appreciated.
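Switching the stats to a timechart with a one-day span produces that shape: one cluster of columns per day, with a column per device (a sketch; the time range is pinned to the last 8 full days with earliest):

```spl
index=pa* sourcetype=pan:threat (action=dropped OR action=blocked) src_ip!=10.* threat_id=* earliest=-8d@d
| timechart span=1d count by dvc_name
```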
I have a form dashboard that has several input fields. How do I only use a token if it has a value? I mean, I may want to leave the input blank and not pass anything to the search, so the input is optional. The reason is that my results don't always include some fields in every log, so if I pass * as the default and it ends up as field=*, this excludes logs in which the field isn't present. How do I get around this? I only want my search to include field=$input$ if the user actually filled out the input; otherwise I want the entire clause removed.
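One pattern often suggested for this (a sketch; it relies on the Simple XML <change>/<eval> elements of reasonably recent Splunk versions, and the quoting may need tuning): keep the raw input token separate from the token the search uses, and compute the search fragment so that a blank input contributes nothing at all:

```xml
<init>
  <!-- start with an empty filter so the search can run before any input -->
  <set token="user_filter"></set>
</init>
<input type="text" token="user_raw">
  <label>User (optional)</label>
  <default></default>
  <change>
    <!-- produce either 'field="value"' or an empty string -->
    <eval token="user_filter">if(len($value|s$)=0, "", "field=" . $value|s$)</eval>
  </change>
</input>
```

The search then reads `index=foo $user_filter$`; when the input is blank the token expands to nothing, so events lacking the field are not excluded.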
I want to write a query that counts when two non-consecutive strings occur in a statement. I am trying something like the following, but it is not able to take a logical AND operator inside the match() method. Note: I want to do this using eval only, because in my larger query I have to perform mathematical operations with more (different) eval variables.

    | eval concatsearch=if(match(_raw,"String1 && String2"),1,0)
    | eval ccount=if(match(_raw,"cc"),1,0)
    | timechart span=1h sum(concatsearch) as concatsearch, sum(ccount) as ccount
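The AND belongs between two match() calls rather than inside the regex string. A sketch of the same pipeline with that one change:

```spl
| eval concatsearch=if(match(_raw,"String1") AND match(_raw,"String2"), 1, 0)
| eval ccount=if(match(_raw,"cc"), 1, 0)
| timechart span=1h sum(concatsearch) as concatsearch, sum(ccount) as ccount
```

The single-regex equivalent would be match(_raw, "String1.*String2|String2.*String1"), covering both orderings.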
OK, so I am trying my best to evaluate the differences between two search results.

Search 1 gives me a list of vm_name:

    index="1" sourcetype="1" source="1" | search state="running" | table vm_name

Search 2 gives me a list of hostname:

    index="2" source="2*" group=tcpin_connections | dedup hostname | table hostname

Each search is crafted from a different index and sourcetype. Both of these lists share common field values. For example, in search 1, vm_name can be "MYPC", and in search 2, hostname is also "MYPC"; both are named MYPC and in reality they are one and the same. However, I need to create a list that compares the values of both searches and, where they match, subtracts them from one another to create a NEW list. The goal is to remove MATCHED results from both searches to create a list of deltas. I have tried the join command, but when I do, the results from the second search come out completely messed up. I tried creating lookups and adding them to one search, but I have the same problem. The only thing I can think of is that the search itself may yield metadata somewhere that skews the results. For example, in search 2 I need to add "dedup hostname" to the search to retrieve an accurate list.
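One join-free way to get the symmetric difference (a sketch; lower() is there in case the two sources disagree on case, and the usual subsearch limits on append apply):

```spl
index="1" sourcetype="1" source="1" state="running"
| dedup vm_name
| eval name=lower(vm_name), src="vms"
| append
    [ search index="2" source="2*" group=tcpin_connections
      | dedup hostname
      | eval name=lower(hostname), src="forwarders" ]
| stats values(src) as sources dc(src) as source_count by name
| where source_count=1
```

Names appearing under only one src value are the deltas; the sources column tells you which side each one came from.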
Hello, we are planning to upgrade our Splunk platform from version 6.4 to version 7.3. We set up a server with version 7.3 and detected a change in the syntax of our search queries. We want to know if there is any mechanism to update the syntax automatically. Thanks.
Through /servicesNS/nobody/MYAPP/admin/nav/default I can access the navigation bar content of my app. I now want to be able to change this via a setup page. Displaying eai:data works fine; however, when trying to save on the setup page, I get an error saying:

    Cannot find item for POST arg_name="/admin/nav/default/eai%3Adata"

My setup.xml:

    <setup>
      <block title="Nav" endpoint="admin/nav" entity="default">
        <input field="eai:data">
          <label>$name$</label>
          <type>text</type>
        </input>
      </block>
    </setup>
Hello, I am trying to set up the Splunk Add-on for ServiceNow. I did the needful in Splunk, and now when I try to activate the integration in ServiceNow, I get an error (screenshot not included). When I check the logs, there is a message that the HTTP status code is 404 or 0, rather than the expected 200. Do you have any idea what I could have done wrong here? Thank you in advance for your help. Regards, Ewelina
Hi, Need help in extracting the values from the below mentioned tags divisionID - Value: ABC.202 accountNumber Value: 111122222 accountStatus Value: Active ppvCreditLimit Value: 0.00 ppvRemainingCreditLimit Value: 0.00 ":"{ \n\"getAccountResponse\" : {\n \"account\" : {\n \"divisionID\" : \"ABC.202\",\n \"accountNumber\" : \"111122222\",\n \"customerNumber\" : \"19118902\",\n \"locationNumber\" : \"191189\",\n \"billingStationLevel0Code\" : \"202\",\n \"billingStationLevel1Code\" : \"50\",\n \"billingStationLevel2Code\" : \"05\",\n \"sourceFTACode\" : \"005\",\n \"accountType\" : {\n \"billerCode\" : \"R\",\n \"enterpriseCode\" : \"RESIDENTIAL\",\n \"description\" : \"Residential\",\n \"longDescription\" : null\n },\n \"accountStatus\" : \"Active\",\n \"billerAccountStatus\" : \"A\",\n \"connectDate\" : \"2019-10-31\",\n \"classification\" : \"SFU\",\n \"name\" : {\n \"last\" : \"raft\",\n \"first\" : \"xxx\"\n },\n \"serviceAddress\" : {\n \"line1\" : \" SAGE ST\",\n \"city\" : \"COL\",\n \"state\" : \"SC\",\n \"postalCode\" : \"211051132\"\n },\n \"phone\" : [ {\n \"number\" : \"9999999999\",\n \"type\" : \"Home\"\n } ],\n \"lineOfBusinessDetail\" : [ {\n \"type\" : {\n \"billerCode\" : \"C\",\n \"enterpriseCode\" : \"VIDEO\",\n \"description\" : \"Video\",\n \"longDescription\" : \"Video\"\n },\n \"status\" : {\n \"billerCode\" : \"1\",\n \"enterpriseCode\" : \"LOBCONNECTED\",\n \"description\" : \"LOB Connected\",\n \"longDescription\" : \"LOB Connected\"\n }\n }, {\n \"type\" : {\n \"billerCode\" : \"D\",\n \"enterpriseCode\" : \"HSD\",\n \"description\" : \"HSD\",\n \"longDescription\" : \"HSD\"\n },\n \"status\" : {\n \"billerCode\" : \"1\",\n \"enterpriseCode\" : \"LOBCONNECTED\",\n \"description\" : \"LOB Connected\",\n \"longDescription\" : \"LOB Connected\"\n }\n } ],\n \"experianPin\" : \"xxxxx140299\",\n \"accountDetail\" : {\n \"totalCurrentBalance\" : 212.31,\n \"totalPendingAmount\" : 0.00,\n \"totalLastPayment\" : -27.00,\n \"totalAmountDue\" : 
312.31,\n \"ppvCreditLimit\" : 0.00,\n \"ppvRemainingCreditLimit\" : 0.00,\n \"language\" : {\n \"billerCode\" : \"ENGL\",\n \"enterpriseCode\" : \"ENGLISH\",\n \"description\" : \"English\",\n \"longDescription\" : \"English\"\n },\n \"auditCreationDate\" : \"2019-10-31\",\n \"locationType\" : {\n \"billerCode\" : \"J\",\n \"enterpriseCode\" : \"RSSINFAM\",\n \"description\" : \"Single Family Home\",\n \"longDescription\" : \"Single Family Home\"\n },\n \"bulkFlag\" : \"N\",\n \"vipCode\" : {\n \"billerCode\" : \"1\",\n \"enterpriseCode\" : \"OWNER\",\n \"description\" : \"Owner\",\n \"longDescription\" : \"Owner\"\n },\n \"billingDetails\" : [ {\n \"currentStatementId\" : \"151441\",\n \"statementCode\" : \"1\",\n \"cycleDay\" : \"1\",\n \"fromDate\" : \"2020-04-01\",\n \"thruDate\" : \"2019-04-30\",\n \"amountDue\" : 312.31,\n \"frequency\" : {\n \"billerCode\" : \"M\",\n \"enterpriseCode\" : \"MONTHLY\",\n \"description\" : \"Monthly Billing\",\n \"longDescription\" : \"Monthly Billing\"\n },\n \"dunningGroup\" : \"0\",\n \"futureDatedFlag\" : \"N\",\n \"paperlessFlag\" : \"N\",\n \"currentBalance\" : 312.31,\n \"lastPaymentDate\" : \"2020-03-09\",\n \"lastPaymentAmount\" : -27.00,\n \"paymentDueDate\" : \"2020-04-18\",\n \"pendingPayment\" : 0.00,\n \"cycle1Amount\" : 156.08,\n \"cycle2Amount\" : 0.00,\n \"cycle3Amount\" : 0.00,\n \"delinquencyAmount\" : 156.08,\n \"delinquencyStatus\" : {\n \"billerCode\" : \"_\",\n \"enterpriseCode\" : \"NORMAL\",\n \"description\" : \"Normal\",\n \"longDescription\" : \"Normal\"\n },\n \"daysDelinquent\" : 44,\n \"billToName\" : {\n \"first\" : \"xxx raft\"\n },\n \"billToAddress\" : {\n \"line1\" : \"SAGE ST\",\n \"city\" : \"COL\",\n \"state\" : \"SC\",\n \"postalCode\" : \"211051132\"\n },\n \"statementHold\" : {\n \"billerCode\" : \"P\",\n \"enterpriseCode\" : \"PAPERONLY\",\n \"description\" : \"Paper Bill Only\",\n \"longDescription\" : \"Paper Bill Only\"\n },\n \"promiseAmount\" : 0.00,\n \"promiseActivityCode\" : 
\"41\",\n \"billingCurrentBalance\" : 312.31,\n \"statementBalance\" : 312.31,\n \"electronicFlag\" : \"N\",\n \"adjustedDelinquencyAmount\" : 156.08\n } ]\n },\n \"accountCategory\" : \"Re\",\n \"accountSegment\" : \"Re\"\n },\n \"sourceSystemTimeZone\" : \"-04:00\"\n }\n}", "responseTime": 551 }
We're struggling to do OS patching of our indexer cluster in a reasonable timeframe. It currently takes about 24 hours, with the vast majority of that time spent waiting for bucket-fixup tasks to complete between reboots. I'm wondering how others do it without impacting searching or filling up index process queues.

Our current process:

- Blast out an apt update && apt upgrade -y && apt autoremove -y to all indexers. Takes about 10-15 minutes to complete.
- Blast out a puppet no-noop to all indexers - takes about 5 minutes to complete.
- Then for each indexer:
  - splunk offline - takes 5-10 minutes
  - reboot - takes < 30 seconds
  - wait for bucket fixups to complete - around 30 minutes

We've had issues using the rolling restart - it sometimes gets stuck in the middle, you have to bounce the cluster master, and it doesn't resume where it left off. By default it also defers saved searches, which effectively disables alerting in our environment for a few hours (we are enabling running saved searches during rolling restarts to address this). Does this just work out of the box for others, or are there secret "gold" settings that you've had to tweak?

Some information about our environment:

- 2 sites
- 24 indexers per site
- Splunk 7.3.3
- Buckets only replicate between sites, no intra-site replication factor. Wondering if this is contributing to our problems... we've been looking into increasing storage to account for this.
- 5 ms between sites, multi-10Gbit links
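If the long pole is the serial offline/fixup cycle, it may be worth revisiting searchable rolling restart rather than manual offlines: since Splunk 7.1 the cluster master can be told to keep searchable copies available as it cycles peers, which avoids most of the fixup churn between reboots (a sketch; worth confirming the stuck-restart behavior you hit has been fixed in your 7.3.3 before relying on it):

```ini
# server.conf on the cluster master -- sketch
[clustering]
rolling_restart = searchable
```

The same mode can be invoked ad hoc from the CLI with splunk rolling-restart cluster-peers -searchable true. Combined with the saved-search deferral override you are already enabling, that also covers the alerting gap.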
I set up a connection, "medicinedb", to serve as an output from Splunk to a SQL Server instance. For testing, I set it up to run every minute. I don't see any activity or any errors when this output runs, and I cannot get Splunk to update records in the third-party system. I have verified that:

- The Splunk data source is returning records
- I am mapping data
- I can read from medicinedb
- I can make updates to medicinedb using basic SQL queries

I can also verify that this works:

    | from savedsearch:"Medicine - Canonical" | dbxoutput output=medicinedb

You can view screenshots here: https://www.dropbox.com/sh/ess43vxnkndftqk/AACmeiekTgZgzp6xLjpyQCBPa?dl=0

Thanks in advance for any suggestions.