All Topics

Hi, I am reading through https://docs.splunk.com/Documentation/SplunkInvestigate/Current/SearchReference/JoinCommandOverview to construct my search query.

query1:

| from datamodel.MODELS.EVENT | where ..............| eval....| eval....| stats....| table

Output will have multiple rows with columns: col1, col2, col3, col4, acol1, acol2, acol3

query2:

| from datamodel.MODELS.METADATA | where ..............| eval....| eval....| stats....| table

Output will have multiple rows with columns: col1, col2, col3, col4, mcol1, mcol2, mcol3

I need to join query1 and query2 on col1, col2, col3, col4. When I tried the following, it gave an "unrecognised AND" error:

| from datamodel.MODELS.EVENT | where ..............| eval....| eval....| stats....| table
| join left=L right=R type=inner where L.col1=R.col1 AND L.col2=R.col2 AND L.col3=R.col3 AND L.col4=R.col4
| from datamodel.MODELS.METADATA | where ..............| eval....| eval....| stats....| table

I also tried [AND L.col2=R.col2 AND L.col3=R.col3 AND L.col4=R.col4], but that gave an "unrecognised |" error. Please kindly suggest if there are any other ways?

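If the join syntax keeps erroring, one alternative that avoids join entirely is to union the two result sets and merge them with stats on the shared key columns. This is only a sketch, assuming union (or append) is available in your SPL version and that both queries keep col1-col4 in their output:

```
| union
    [| from datamodel.MODELS.EVENT | where ... | stats ... | table col1, col2, col3, col4, acol1, acol2, acol3]
    [| from datamodel.MODELS.METADATA | where ... | stats ... | table col1, col2, col3, col4, mcol1, mcol2, mcol3]
| stats values(acol1) as acol1, values(acol2) as acol2, values(acol3) as acol3,
        values(mcol1) as mcol1, values(mcol2) as mcol2, values(mcol3) as mcol3
        by col1, col2, col3, col4
| where isnotnull(acol1) AND isnotnull(mcol1)
```

The final where keeps only keys that appeared on both sides, approximating an inner join; drop it if left-join semantics are acceptable.
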
Hi All, how could I separate the values of multiselect inputs that came from a drilldown? The values come from a dashboard click drilldown into the multiselect field; unfortunately the tokens passed are joined in one box. How could I separate them? Hope the picture will help.

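One pattern worth checking (the dashboard name and token name below are made up for illustration): a multiselect is pre-filled with multiple distinct values when the drilldown URL repeats the form parameter once per value, rather than passing a single comma-joined string. A minimal sketch using a link drilldown:

```
<drilldown>
  <link target="_blank">/app/my_app/target_dashboard?form.city_tok=$row.city_a$&amp;form.city_tok=$row.city_b$</link>
</drilldown>
```

If only one form.city_tok parameter is passed, the whole string lands in the multiselect as a single entry, which matches the symptom described.
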
Hi there, I am trying to create a dashboard with some filters. Roughly, 3 boxes populated and filtered by a lookup or KV store lookup:

- cat (car manufacturer) - for instance, let's say I choose Mercedes
- subcat (type) - petrol/diesel/electric (I choose a petrol filter)
- result (cars listed associated with the above filters) - it lists car models from Mercedes that are petrol

But then maybe I want to go back and have 2 types of filters, so I would go back to "subcat" and choose both "petrol" and "electric", and the result would then list both types in "result". How can I accomplish this? Thanks!

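Cascading inputs like this are usually built by driving each input's populating search from the previous input's token. A minimal Simple XML sketch, assuming a hypothetical lookup file cars.csv with columns cat, subcat, model (all names here are illustrative):

```
<input type="dropdown" token="cat">
  <label>Manufacturer</label>
  <search><query>| inputlookup cars.csv | dedup cat | sort cat</query></search>
  <fieldForLabel>cat</fieldForLabel>
  <fieldForValue>cat</fieldForValue>
</input>
<input type="multiselect" token="subcat">
  <label>Type</label>
  <search><query>| inputlookup cars.csv | search cat="$cat$" | dedup subcat</query></search>
  <fieldForLabel>subcat</fieldForLabel>
  <fieldForValue>subcat</fieldForValue>
  <valuePrefix>subcat="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>
```

The result panel can then consume both tokens, with the multiselect expanding to an OR list when two types are chosen:

```
| inputlookup cars.csv | search cat="$cat$" ($subcat$) | table model
```
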
Hi All, I am planning to upgrade a heavy forwarder from v6.6.6 to v7.3.3. What should my approach to the upgrade be? Can I directly upgrade the HF to v7.3.3, or do I have to upgrade it to v7.0 first and then to v7.3.3? Please help. Thanks. Regards, Abhi

I have a clustered Splunk environment and monitoring in place for quite a few application logs. Lately, I have been encountering an issue with data collection in Splunk. For some frame of time every day (2 to 5 hours), I do not see any data even though the application server has logs generated, but for the rest of the day it works just fine. Universal forwarders and indexers are working just fine. This is affecting the dashboards and alerts, as data is being missed.

Example log:

2020-02-13T05:01:45.249-0500 INFO 801 | UNIQ_ID=2AB2130 | TRANS_ID=00000170151fda6c-171dce8 | VERSION=18.09 | TYPE=AUDIT| UTC_ENTRY=2020-02-13T10:01:45.178Z | UTC_EXIT=2020-02-13T10:01:45.230Z,"Timestamp":"2020-02-13T10:01:45.062Z","Data":{"rsCommand":"","rsStatus":"executed","pqr":"2020-02-13T09:57:13.000Z","rsStatusReason":"executed","XYZ":"2020-02-13T09:57:29.000Z","rsMinutesRemaining":"6","remoDuration":"10","internTemperature":"12","ABC":"2020-02-13T10:00:20.000Z","Sucction"}}

Can anyone give some insight if you have faced or come across this kind of issue? I suspect Splunk is getting confused between the time format of the actual event and the date/time values inside the event, like the abc, pqr, xyz timestamps in the example log above, but that doesn't tell me how to go about solving this issue.

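Pinning timestamp recognition to the leading timestamp usually stops Splunk from latching onto dates deeper inside the event. A minimal props.conf sketch for events shaped like the example (the stanza name is made up, and the settings should be verified against the actual data before deploying):

```
[my_app_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T
```

MAX_TIMESTAMP_LOOKAHEAD limits how far into the event Splunk looks for a timestamp, so the embedded ISO dates in the JSON payload are ignored.
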
Hello All, I have been going through multiple posts but am still not able to configure my Splunk Add-on for Cisco ESA. I have some confusion and need your opinion on it. I have a distributed environment and have installed the Splunk Add-on for Cisco ESA on both the search head and the deployment server. My questions are:

- Where should I configure the inputs (search head or deployment server)?
- Where should I push the ESA logs (search head or deployment server)?
- On Cisco ESA, the logs are currently configured through FTP. Is there a way to push/share or access these logs, or should I use the SCP method?

I would greatly appreciate your suggestions. Thanks in advance,

I want to add the Splunk Dashboard Examples app to my 7.01 environment. As soon as I select "Find new apps" I get the error message "error connection reset by peer".

Hi, I have a requirement to customize a scheduled report generated in CSV format. The report in .csv must have two sheets: the first sheet should have the Splunk logo and the time frame the report is for, and the second sheet should contain the report name and the data. Is there any way I can customize the CSV to include the above? Thanks, Ajay

I need to extract from the _raw below only the SPLUNKXML="" value.

_raw:

2020-02-13 01:04:18.910, COUNT="863132", URL="http://122.32.10:8080/HP/Material", SAD="GET", SPLUNKXML="<APICALL><IPCODE>201</IPCODE><returnTime>1581573606000</returnTime><data><ULID>049726</ULID><requestId>$658262</requestId><currentStatus>SPlunk - Picked</currentStatus><pickedQuantity><value>634</value><uom>EA</uom><lastUpdateTime>1581399738000</lastUpdateTime></data></APICALL>", IPCODE="111", Timestamp="2020-02-13 01:00:06.75"

Output needed:

SPLUNKXML="<APICALL><IPCODE>201</IPCODE><returnTime>1581573606000</returnTime><data><ULID>049726</ULID><requestId>$658262</requestId><currentStatus>SPlunk - Picked</currentStatus><pickedQuantity><value>634</value><uom>EA</uom><lastUpdateTime>1581399738000</lastUpdateTime></data></APICALL>"

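A rex against _raw capturing everything up to the closing quote is one way to pull this out (your index/sourcetype terms go before the rex):

```
| rex field=_raw "SPLUNKXML=\"(?<SPLUNKXML>[^\"]+)\""
| table SPLUNKXML
```

This works as long as the XML payload itself never contains a double quote; if it can, the pattern would need to anchor on the trailing ", IPCODE= instead.
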
Good morning, I am implementing Infoblox logs in Splunk and it is giving me problems. I have 3 Splunk machines: one is the forwarder, another the indexer, and the other the search head. Both the forwarder and the search head have the web app; the indexer, on the other hand, only works via the CLI. On the forwarder machine I installed the Infoblox ActiveTrust Cloud Input Add-On, so that it can bring the logs into Splunk. On the search head machine I installed Infoblox ActiveTrust Cloud, which takes care of the visualization part. In order for these 2 machines to be connected to each other, I had to create an index by hand on the indexer through the CLI. The problem is that I now receive the logs on the search head, but they are full of errors like the ones attached in the following image. Would anyone know whether the problem comes from Splunk when parsing the information, from Infoblox when sending the logs, or even from an error when creating the index by hand with the console (CLI)? Greetings and thank you, Carlos.

We are currently trying to set up a reliable solution for moving data from Splunk to an HDFS location. This is not for archiving; we would like to move the data to HDFS so that we can further process it in the HDFS cluster using the Apache Spark processing framework. We have looked at these options:

1. Forward data from a Splunk HF to an Apache NiFi Syslog processor to push the data to HDFS
2. Forward data from a Splunk HF to an Apache NiFi TcpListener processor to push the data to HDFS
3. Splunk Hadoop Connect (after looking at the Splunk documentation, it looks like this plug-in does not work with the latest versions)
4. Splunk DSP, where the data will be moved directly to Kafka and from there to HDFS

Thanks in advance, Manu Mukundan

I have a clustered Splunk environment and monitoring in place for quite a few application logs. Lately, I have been encountering an issue with data collection in Splunk. For some frame of time every day (2 to 5 hours), I do not see any data even though the application server has logs generated, but for the rest of the day it works just fine. Universal forwarders and indexers are working just fine. This is affecting the dashboards and alerts, as data is being missed.

Example log:

2020-02-13T05:01:45.249-0500 INFO 801 | UNIQ_ID=20200213050500000170151fda6c-171dcee | TRANS_ID=000001da6c-171dce8 | VERSION=1.09 | TYPE=AUDIT | INTERNAL_ERROR_MSG= | UTC_ENTRY=2020-02-13T10:05.178Z | UTC_EXIT=2020-02-13T10:01:45.230Z,"Timestamp":"2020-02-13T10:01:45.062Z","Organization":"abc","Region":"RStS","ApplicationName":"Anoid"},"Data":{"rsCommand":"Clization","rsStatus":"executed","statusTimeStamp":"2020-02-13T09:57:13.000Z","rsStatusReason":"executed","lastRemoTimeStamp":"2020-02-13T09:57:29.000Z","rsMinutesRemaining":"6","remoDuration":"10","interTemperature":"12","interTimeStamp":"2020-02-13T10:00:20.000Z","Successful Execution"}}

Can anyone give some insight if you have faced or come across this kind of issue? I suspect Splunk is getting confused between the time format of the actual event and the date/time values inside the event, like the status timestamp and last remo timestamp in the example log above, but that doesn't tell me how to go about solving this issue.

Prior to updating to Splunk Enterprise 8.0.2, scheduled accelerated reports ran extremely fast:

Report A - Duration: 37.166, Record count: 314

After updating to Splunk Enterprise 8.0.2, the report ran extremely slow:

Report A - Duration: 418.621, Record count: 300

Given the patch notes for 8.0.2, I'm not seeing any changes to acceleration or summary indexing, so is it safe to assume this is a fluke? The massive increase in report generation (job) time of the scheduled accelerated reports appears to be caused by them no longer accessing the corresponding report acceleration summary: the "Access Count" never goes up when the scheduled reports are run. Guess we'll wait for 8.0.3 to fix this.

Troubleshooting steps attempted:

- Manually rebuilt report acceleration summaries
- Deleted all affected report acceleration summaries
- Deleted and recreated affected production reports (recreated schedule and checked the box for acceleration)
- Checked filesystem permissions of the inputlookup CSV - confirmed -rw-rw-r-- splunk splunk

I have a lookup table that stores employee data to map employee numbers and departments. In the dashboard I will use the following SPL, but I don't want users to be able to query the lookup table or export it separately. Is there any way to solve this problem?

index=idx_foo
| rename owner.email as user_mail
| join type=left user_mail [| inputlookup append=t company_emp_all.csv]
| fields project, user_name, user_dept

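As a side note on the query itself, the lookup command is usually cheaper than join for this kind of enrichment, and it works against a lookup definition rather than inviting users to inputlookup the file directly. A sketch, assuming a lookup definition named company_emp_all whose key column is user_mail (field names taken from the question; verify they match the actual CSV header):

```
index=idx_foo
| rename owner.email as user_mail
| lookup company_emp_all user_mail OUTPUT user_name user_dept
| fields project, user_name, user_dept
```

Restricting who can read the lookup is then a matter of the lookup object's permissions rather than the query.
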
I have JavaScript very similar to the below in my dashboard, which adds color to the table. As I've updated dashboard.css I cannot use the XML color palette, so I had to use a table cell renderer.

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, TableView) {
    var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
        canRender: function(cell) {
            // Point-C
            return _(['My Column Name', 'Name']).contains(cell.field);
        },
        render: function($td, cell) {
            // Point-D
            if (cell.value == "red" || cell.value == "green" || cell.value == "yellow") {
                $td.html("<div class='circle_" + cell.value + "'></div>");
            } else if (cell.value == "NoData" || cell.value == "null") {
                $td.html("<div class='align_center'></div>");
            } else {
                $td.html("<div class='align_center'>" + cell.value + "</div>");
            }
        }
    });

    // List of table IDs to add the icon to
    var tableIDs = ["Mytable1", "Mytable2"];
    for (var i = 0; i < tableIDs.length; i++) {
        var sh = mvc.Components.get(tableIDs[i]);
        if (typeof(sh) != "undefined") {
            sh.getVisualization(function(tableView) {
                // Add custom cell renderer and force re-render
                // Point-A
                tableView.table.addCellRenderer(new CustomRangeRenderer());
                tableView.table.render();
                // Point-B
            });
        }
    }
});

My code executes properly in Chrome. While running in Firefox it reaches Point-A and Point-B but not Point-C or Point-D. Any idea what can be wrong, or any workaround? In Firefox the issue is also intermittent: at some random times one of the two tables loads as well, but not all the time. Does anyone have any idea why the table cell renderer sometimes does not work in Firefox?

Hi, we are using Splunk to query the LoginHistory object from our Salesforce org. In the login report there are two fields: UserId and UserAccountId. May I know what values these two fields refer to? Sometimes they have the same values, sometimes they have different values. The following release note from Splunk Add-ons states: "Version 2.0.0 of the Splunk Add-on for Salesforce supports multiple accounts or custom endpoints. Therefore, there is a new field in version 2.0.0 called UserAccountId." https://docs.splunk.com/Documentation/AddOns/released/Salesforce/Releasehistory What does this UserAccountId refer to in a LoginHistory record? Thanks, Aryne

I have a host sending log data and I want to exclude a specific directory from being ingested and/or indexed, but no matter what I try, the data continues to appear. I am using a heavy forwarder that acts as my config server for the agent, and I have the indexer on another instance. The source to be excluded is "/var/log/lsyncd/lsyncd-status.log", but I'm looking to exclude the whole "/var/log/lsyncd" directory. I have tried adding the following to $SPLUNK/apps/Splunk_TA_nix/local/inputs.conf on both the forwarder and the indexer, but the data continues to flow:

monitor:///var/log/lsyncd
disabled = false

I have also tried adding a blacklist option using blacklist=(*.log), but again without the desired result. What am I missing, or how should I be configuring this?

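Two things stand out: a monitor stanza with disabled = false is enabled, so it collects rather than excludes, and blacklist takes a regex, not a glob like (*.log). A minimal inputs.conf sketch of both approaches (the stanza paths are illustrative and need to match how the TA actually monitors the data):

```
[monitor:///var/log/lsyncd]
disabled = true

# or, on the broader stanza that is picking these files up:
[monitor:///var/log]
blacklist = ^/var/log/lsyncd/
```

Note that input stanzas only take effect on the instance actually reading the files (the forwarder on the host), so the config has to reach that instance, e.g. via a deployed app, rather than the indexer.
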
Hello, I want to run an external command on a field from Splunk search results and return the results back to Splunk. The following is the query:

index="test" sourcetype=_json | xmlkv | fields httpRequestBody | table httpRequestBody

httpRequestBody is a JSON request. I want to run an XSD validator, which I already have, on the JSON using the external tool and return the validator's results back to Splunk. Any advice on how this can be done?

I have a trellis view where I break down my charts into cities. The labels are something like 'Charlotte, NC'. I can make a drilldown to my details page using form.city=$trellis.value$. The problem is that now I want to improve performance on my target page. It currently pulls data for all 100 of my cities and then filters by the city name, using a lookup table to convert 'Charlotte, NC' to 'clt', which I can then apply to a hostname filter:

index=data sourcetype=searchdata "string"
| eval fields=split(host, "."), market=mvindex(fields, 1)
| lookup sitemapping sitecode as market OUTPUT region, sitecity, sitecode
| search sitecity="Charlotte, NC"
| ...

What I would like to do is use tag::host="clt" so that I can filter the records in the initial search. One option is to extract the code somehow from the trellis; the other is to convert from the label to the code in my query before I do the search part. I tried putting an inputlookup before the search, but that ends up filtering out all the data due to the results of the inputlookup:

| inputlookup market-mapping
| search sitecity="Charlotte, NC"
| fields sitecode
| search index=data sourcetype=searchdata "string" tag::host=sitecode

The inputlookup by itself returns 'clt' in the example, and running the search by itself returns my data. Thanks

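A subsearch that renames the looked-up code to the field you want to filter on is a common pattern for this; a sketch, assuming the market-mapping lookup has the sitecity and sitecode columns shown in the question:

```
index=data sourcetype=searchdata "string"
    [| inputlookup market-mapping
     | search sitecity="Charlotte, NC"
     | fields sitecode
     | rename sitecode as "tag::host"]
```

The subsearch expands to tag::host="clt" before the outer search runs, so the filter is applied in the initial search rather than after all 100 cities have been pulled back.
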
I have the following data, and I am trying to create a time chart of the average duration by channel.

"_time",duration,CH
"2020-02-13 11:30:32.367",275,BOSRetail
"2020-02-13 12:47:59.334",202,LTSBRetail
"2020-02-13 11:02:54.025",216,BOSRetail
"2020-02-13 11:26:11.459",264,BOSRetail
"2020-02-13 11:53:03.636",179,BOSRetail
"2020-02-13 11:20:53.384",269,BOSRetail
"2020-02-13 10:58:52.428",264,BOSRetail
"2020-02-13 09:41:22.445",216,LTSBRetail
"2020-02-13 09:56:09.820",233,LTSBRetail
"2020-02-13 10:58:13.035",240,LTSBRetail
"2020-02-13 11:47:48.664",325,BOSRetail
"2020-02-13 12:21:27.147",274,LTSBRetail
"2020-02-13 11:18:59.352",235,BOSRetail
"2020-02-13 11:23:25.297",257,BOSRetail
"2020-02-13 11:03:32.007",274,HalifaxRetail
"2020-02-13 11:02:15.745",181,LTSBRetail
"2020-02-13 11:47:03.084",264,BOSRetail
"2020-02-13 15:28:01.956",260,HalifaxRetail
"2020-02-13 11:54:23.306",276,BOSRetail
"2020-02-13 11:55:58.454",215,LTSBRetail
"2020-02-13 11:00:05.081",240,HalifaxRetail
"2020-02-13 11:56:38.345",236,BOSRetail
"2020-02-13 11:49:52.787",226,BOSRetail
"2020-02-13 15:24:13.651",247,HalifaxRetail
"2020-02-13 09:31:26.887",194,LTSBRetail
"2020-02-13 11:51:59.928",262,BOSRetail
"2020-02-13 11:57:18.917",227,HalifaxRetail
"2020-02-13 09:42:04.574",171,LTSBRetail
"2020-02-13 15:25:51.943",334,HalifaxRetail

For some unknown reason, the average duration values are not reflected on the timechart using the below query:

| timechart span=1h avg(duration) by CH

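One cheap check when avg() comes back empty is whether duration is actually being extracted as a numeric field at search time; forcing a conversion first makes that visible. A sketch, assuming the three columns are extracted as fields named as in the CSV header:

```
| eval duration=tonumber(duration)
| timechart span=1h avg(duration) as avg_duration by CH
```

If avg_duration is still empty, it is worth running `| stats count by CH` and checking the extracted duration values directly, since the problem would then be field extraction or the _time assignment rather than the timechart itself.
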