All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I get this error when trying to launch the Atlassian JIRA Issue Alert app (https://splunkbase.splunk.com/app/5037/). It works fine on another server I have, though, and I have no idea why. Any ideas what to check?

The call triggering this is the following: https://hostname:port/en-GB/splunkd/__raw/servicesNS/nobody/TA-atlassian-jira-issue-alerts/TA_atlassian_jira_issue_alerts_settings/proxy?output_mode=json&_=1605256012447

The error in splunkd.log:

ERROR AdminManagerExternal - Received malformed XML from external handler: hello world\n1267650600228229401496703205376\n<eai><eai_settings><ACTION_CRED>--cred--</ACTION_CRED><appName>TA-atlassian-jira-issue-alerts</appName><callerArgs><id>proxy</id><args/></callerArgs><capabilityRead></capabilityRead><capabilityWrite></capabilityWrite><context>2</context><customAction></customAction><customActionCap></customActionCap><didFilter>False</didFilter><didPaginate>False</didPaginate><docShowEntry>True</docShowEntry><endpoint>&lt;splunktaucclib.rest_handler.endpoint.MultipleModel object at 0x7f105c49a490&gt;</endpoint><handler>&lt;splunktaucclib.rest_handler.handler.RestHandler object at 0x7f105c3a3310&gt;</handler><maxCount>30</maxCount><payload></payload><posOffset>0</posOffset><requestedAction>2</requestedAction><requestedFilters></requestedFilters><restartRequired>False</restartRequired><shouldAutoList>True</shouldAutoList><shouldFilter>False</shouldFilter><shouldReload>False</shouldReload><sortAscending>true</sortAscending><sortByKey>name</sortByKey><supportedArgs><arg name="--cred--"><isRequired>False</isRequired></arg></supportedArgs><userName>nobody</userName></eai_settings><config_info><feed_name></feed_name></config_info></eai>\n
Hello guys, could you let me know the definitions of fields like "is_first_for_luuid" in the Checkpoint logs that come through the OPSEC LEA app? And are there fields you consider useless that we could filter out from the app GUI, such as log_version, uuid, rule_uid...? A sample event:

time=1604660518|loc=6427159|fileid=1604657959|action=accept|orig=xxx|i/f_dir=inbound|i/f_name=bond0.89|has_accounting=0|logId=-1|log_type=log|log_sequence_num=0|is_first_for_luuid=131072|log_version=1|uuid=<5fa52d25,0002012f,fd000d0a,c0000005>|product=VPN-1 & FireWall-1|rule=1104|rule_uid={C243B342-AD50-4161-B1FD-26C033DE9D57}|rule_name=150609A|service_id=tcp_8089|src=xxx|s_port=35718|dst=xxx|service=8089|proto=tcp|__policy_id_tag=product=VPN-1 & FireWall-1[db_tag={F5CF3E23-0C69-E441-8BD6-4C39EDD32B61};mgmt=xxx;date=1604590675;policy_name=xxx]

Thanks.
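If stripping noisy fields at index time turns out to be preferable to filtering in the GUI, a hedged props.conf sketch using SEDCMD might look like this. The sourcetype name is a placeholder, and the regex assumes pipe-delimited events like the sample above:

```
# props.conf (sourcetype name is a placeholder)
[opsec:checkpoint]
SEDCMD-drop_noise = s/\|(log_version|uuid|rule_uid)=[^|]*//g
```

Note that SEDCMD rewrites the raw event before indexing, so the dropped fields are gone permanently; test on a dev index first.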
Hi, I am trying to join two log files under the same index & sourcetype that have a common field between them.

A log event where type=dte2_fios has the fields TRANID, ANALYSIS, WPID (the common field), COMMAND.
A log event where type=dte4_fios has the fields REQID, FT, WPID (the common field), DIP, DTE, FTC, ERR_MSG.

I need a table output with the fields below and their corresponding values in each row:
TRANID, WPID, REQID, ANALYSIS, COMMAND, FT, DIP, DTE, FTC, ERR_MSG

I used the query below, but it gives me multiple values for REQID & FT in a single row, since one WPID will have multiple REQIDs & FTs. I need a separate row with all the above fields.

index=delb_np sourcetype="app_kafka_np_east" AND (function_name="dte2_fios" OR function_name="dte4_fios") | table _time @timestamp function_name log_message.WPID log_message.CID log_message.TN log_message.TRANID log_message.REQID log_message.ANALYSIS log_message.COMMAND log_message.DIP log_message.FT | stats values(log_message.TRANID) as TRANID, values(log_message.REQID) as REQID, values(log_message.ANALYSIS) as ANALYSIS, values(log_message.COMMAND) as COMMAND, values(log_message.DIP) as DIP, values(log_message.FT) as FT by log_message.WPID, log_message.CID
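A hedged sketch of one way to get one row per REQID/FT pair: keep the stats, then mvzip the multivalue fields that belong together and mvexpand the zipped value. Field names follow the query above; this assumes REQID and FT line up positionally within each WPID:

```
index=delb_np sourcetype="app_kafka_np_east" (function_name="dte2_fios" OR function_name="dte4_fios")
| stats values(log_message.TRANID) as TRANID, values(log_message.ANALYSIS) as ANALYSIS,
        values(log_message.COMMAND) as COMMAND, values(log_message.DIP) as DIP,
        values(log_message.REQID) as REQID, values(log_message.FT) as FT
        by log_message.WPID
| eval pair=mvzip(REQID, FT)
| mvexpand pair
| eval REQID=mvindex(split(pair, ","), 0), FT=mvindex(split(pair, ","), 1)
| fields - pair
```

One caveat: values() de-duplicates and sorts its values, which can break the positional pairing; list() preserves arrival order if that matters here.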
Hello All, I have added some custom detection rules (servlet type) using URI HTTP request match criteria. The newly configured transactions are shown under "All other traffic", and during registration some number of calls shows in the "view traffic details" tab. Under the Business Transactions tab, the newly configured BTs are not showing any data (number of calls, response time, snapshots, statistics, data points, etc.); only the names appear. If I click for details, a popup message says "can not fetch Transaction diagnostics is ..". Please advise what the issue could be here. TIA
Hello Splunkers, for a DB Connect lookup I have a reference search with the format below:

2020-11-13 01:14:12 * PUT /packages/version/323 - 8080 userX Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/86.0.4240.111+Safari/537.36 * 200 0 0 31

I have a lookup populated from SQL as below:

Id      Name
323     ApacheMaven

I want to match the ID from the reference search with the lookup and output the results as ID, Name, User. Please let me know if we can achieve that.
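A hedged sketch, assuming the DB Connect lookup is defined in Splunk as package_lookup with fields Id and Name, and that the username is extracted into a user field (all three names are placeholders):

```
sourcetype="AppV-User" "PUT /packages/version/"
| rex field=_raw "PUT /packages/version/(?<Id>\d+)"
| lookup package_lookup Id OUTPUT Name
| table Id Name user
```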
Hi, I'm getting the warning message below:

CsvDataProvider - Unable to find filename property for lookup=* will attempt to use implicit filename.
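This warning typically points at a lookup definition without a filename attribute, in which case Splunk falls back to an implicit filename based on the stanza name. A minimal transforms.conf sketch, with placeholder names:

```
# transforms.conf (stanza and file names are placeholders)
[my_lookup]
filename = my_lookup.csv
```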
Does anybody know if there is a list of all fixed issues over time for all Splunk releases? Currently there is, for every release, a list of them in the respective release notes. But it would be nice if there were a complete, searchable list, to find out whether a fix for a given issue exists and when it was implemented, or at least which minimum version I have to install to get it.
Hi there, when reviewing Splunk events, some display as below: the text is split into two events, and the second event has no datetime at the beginning. The log is from log4net with a datetime.

... FROM dbo.History202011 as QueryHistory Inner Join TmpQueryHistoryFilter tf on (QueryHistory.Email = tf.Email or tf.Email is null) WHERE date >= '2020-11-11' AND date <= '2020-11-12' AND (CreationDate BETWEEN '2020-11-11 16:00' AND '2020-11-12 16:00') AND QueryTypeID IN (1,4) AND CompanyID = 42 Order by QueryHistoryId offset 0 Limit 20 ) A CROSS JOIN TotalQueries B WHERE date >= '2020-11-11' AND date <= '2020-11-12' AND (CreationDate BETWEEN '2020-11-11 16:00' AND '2020-11-12 16:00') AND QueryTypeID IN (1,4) AND CompanyID = 1000 GROUP BY QueryHistory.StatusID, QueryHistory.QueryFromId ,QueryHistory.QueryTypeID ,CASE WHEN QueryHistory.EAScore >=900 THEN 1...

When I view the source, it looks like this:

Any thoughts?

Thanks,
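If the splitting happens because Splunk breaks on embedded newlines inside a multi-line event, one hedged fix is to break events only where a new line starts with a timestamp. A props.conf sketch, with the sourcetype name and timestamp layout assumed from a typical log4net pattern:

```
# props.conf (sourcetype name and patterns are assumptions)
[my_log4net_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_EVENTS = 1000
```

This should go on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on a universal forwarder.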
The issue is that on the “PCI Compliance Posture” dashboard, the “Compliance Status History” view is not showing data; it just displays a line.
Hello Everyone, I have two searches:

search 1 => index="appv" sourcetype="AppV-User" *PUT /package*
search 2 => index="appv" sourcetype=sql_appv_packageversion

Search 1 events look like this:

2020-11-13 01:14:12 IP PUT /packages/version/323 - 8080 **** *** Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/86.0.4240.111+Safari/537.36 ** 200 0 0 31

Search 2 events look like this:

2020-11-12 17:58:00.062, Id="323", PackageGuid="*", Name="ApacheMaven-363", VersionGuid="*", VersionNumber="0.0.0.1", Size="0", PackageUrl="\*", UserPolicyFromManifest="<UserConfiguration xmlns:xsd="*" xmlns:xsi="h*" PackageId="*" DisplayName="(App-V Default)" xmlns="*">

Now I need to match the "323" in both events and send out an alert. Is that possible? Also, there is a timestamp difference between the two events.
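One common pattern is to extract the id from each sourcetype and correlate with stats, alerting when both sides are present. A hedged sketch (the rex patterns are assumptions based on the sample events, and the search window must be wide enough to cover the timestamp gap between the two):

```
index="appv" ((sourcetype="AppV-User" "PUT /packages/version/") OR sourcetype=sql_appv_packageversion)
| rex field=_raw "PUT /packages/version/(?<pkg_id>\d+)"
| rex field=_raw "Id=\"(?<sql_id>\d+)\""
| eval pkg_id=coalesce(pkg_id, sql_id)
| stats dc(sourcetype) as sourcetypes values(Name) as Name by pkg_id
| where sourcetypes=2
```

Saved as an alert that triggers when the number of results is greater than zero, this fires whenever the same id appears in both sourcetypes within the search window.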
Hi, I rarely log in to a UF locally after the deployment server path is set. (I guess I have been lucky...) However, while troubleshooting some UF [thruput] limits, I logged into a local server and noticed a SplunkUniversalForwarder app in the UF directories. (I must not have paid too much attention to it before...)

Per https://docs.splunk.com/Documentation/Forwarder/8.0.6/Forwarder/Configuretheuniversalforwarder: "The universal forwarder has a SplunkUniversalForwarder app, which includes preconfigured settings that let the forwarder run in a streamlined mode. Do not edit any configuration files within that app unless you receive specific instructions."

Path: /opt/splunkforwarder/etc/apps/SplunkUniversalForwarder (app)

Within the app there are a number of confs, including web.conf. The first line of the web.conf spec is: "This file contains possible attributes and values you can use to configure the Splunk Web interface."

I am under the impression this app was created for a reason, duplicating / calling out some specific confs that are also found in the usual place like /opt/SplunkUniversalForwarder/etc/system/default. My curiosity is piqued, and I need to ask: when would web.conf settings be applied to a UF?

Per the same page: "Because the universal forwarder does not have Splunk Web, you must give the forwarder a configuration either during the installation (on Windows systems only) or later, as a separate step."

Has anyone used web.conf settings on a UF? Or is this web.conf setting in there to make sure the webserver is disabled?

[settings]
# disable the webserver
startwebserver = 0

Thank you.
I have the simple search query below and would like to timechart nodename by date/time against "gbps", "uplink", and "downlink":

index=xyz source=xyz* gbps OR uplink OR downlink OR nodename

The visualization I want has the columns: Date_Time, Nodename, gbps, downlink, uplink.
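Assuming gbps, uplink, and downlink are extracted numeric fields and nodename is an extracted field (none of which is confirmed by the search above), a hedged sketch that produces exactly those columns is to bin time and aggregate with stats:

```
index=xyz source=xyz*
| bin _time span=1h
| stats avg(gbps) as gbps avg(downlink) as downlink avg(uplink) as uplink by _time nodename
| rename _time as Date_Time nodename as Nodename
```

Swap avg() for sum() or latest() depending on what the values represent, and adjust the span to the desired granularity.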
Hi, I am creating a Splunk dashboard, but I am facing a weird result that really confuses me. I tried two queries:

query1: * | dedup somthing | timechart count(somthing) as total
query2: * | timechart distinct_count(something) as total

In my understanding, these two queries should both give me the total number of distinct "something" in every bucket. But I get different results. For example, with a last-7-days time range I get:

query1: 1,1,0,0,1,1,3
query2: 1,3,0,0,1,4,3

Actually query 2 is correct; I do have 3 and 4 "something" on day 2 and day 6. So what is the difference between these two queries?
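For reference, the two forms differ because dedup removes duplicate values across the entire search time range before timechart ever buckets the events, so a value already seen on an earlier day is dropped from later buckets, while distinct_count counts distinct values within each bucket independently. The per-bucket form, using the dc alias, is:

```
* | timechart span=1d dc(something) as total
```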
Splunk version = 8.0.6, ES version = 6.1.0, Splunk_DA-ESS_PCICompliance = 4.1.0.

The issue is that on the “PCI Compliance Posture” dashboard, the “Compliance Status History” view is not showing data. It just displays "Unable to find tag filtered".
I have a result like this:

parameter  compliance  non-compliance
64bit      43          22
Bios       24          41
Error      0           65
Inter      64          1
OS         61          4

And I want Error to be displayed in the last row, like below:

parameter  compliance  non-compliance
64bit      43          22
Bios       24          41
Inter      64          1
OS         61          4
Error      0           65

Is this possible in Splunk without CSS or JS?
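One hedged approach in plain SPL: add a temporary sort key that pushes the Error row last, sort on it, then drop it (the field name sortkey is arbitrary):

```
... | eval sortkey=if(parameter="Error", 1, 0)
| sort 0 sortkey parameter
| fields - sortkey
```

The 0 after sort lifts the default 10,000-row limit so no rows are silently dropped.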
Running | makeresults | eval s="foo\nbar" displays foo\nbar, and it is unclear whether the variable contains a newline or just the literal characters "\n". Is this the right syntax to insert a newline (or a tab)?
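In eval string literals, \n is stored as the two literal characters backslash and n, not a real newline. A hedged way to build an actual newline and verify it by string length:

```
| makeresults
| eval s="foo".urldecode("%0A")."bar"
| eval length=len(s)
```

With a real newline, len(s) is 7; with the literal "foo\nbar" it is 8. The same trick gives a tab via urldecode("%09").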
index::my_index host::my_host source::my_source sourcetype::my_sourcetype field1="some value" | stats list(*) AS * BY field2 | search "string"

The above query will fail because the secondary search no longer has the _raw field to search against. How would you perform a string-based search across all available fields in the secondary search without the _raw field?
Hello, I am a Splunk newbie and I am having issues using this software. I have gone through the documentation, but I still find it challenging.

For instance, how does one modify the code that generates an existing search? I have a search alert called "Test". I can change what happens when the alert is triggered, but I do not see how to modify the actual search code (for lack of a better word). I only see Save As, as Save is greyed out.

The person who worked with Splunk prior to me created a dashboard that lists servers and the number of times each server appears within a specified period. I would like to create an alert where, if a new server appears, an email alert is sent out if there are more than 100 lines (or logs generated for this particular server) within an hour for this new server. Email configuration is fine and works as desired.

Can someone guide me as to how to create such an alert? Is there a way to use the code that generates the dashboard? Any assistance would be greatly appreciated. I have only been working with Splunk for 2 days. Thanks in advance! Mark
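A hedged sketch of the threshold part of such an alert, assuming the events live in index=main and the server name is in the host field (both are placeholders; the dashboard's own search would supply the real ones). Scheduled hourly and set to trigger when the number of results is greater than zero:

```
index=main earliest=-1h
| stats count by host
| where count > 100
```

Detecting that a host is genuinely new would additionally need a baseline, e.g. a lookup of previously seen hosts to filter against before the threshold check.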
I'm trying to combine multiple searches to get a count for each day, but it seems everything I do breaks the search in some way. The metrics I'm trying to combine have no direct relation to each other, and the logs for each metric are different. As an example, I'm trying to get a total count of calls received and a count of error codes generated, and then getting those two to display side by side on a bar chart to see a week's worth by day.
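One hedged pattern when the metrics share no common field: match both kinds of events in a single base search, tag each event with an eval, and timechart by the tag. The index and search terms below are placeholders for whatever distinguishes the two log types:

```
index=main ("call received" OR "error code")
| eval metric=if(searchmatch("error code"), "errors", "calls")
| timechart span=1d count by metric
```

Rendered as a bar chart, this gives two bars per day, one per metric, over the week.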
Hi everyone, I need help creating a table based on my logs. My logs are formatted as follows:

[2020-11-10 20:27:10,260]INFO - Logging info for Splunk:
[2020-11-10 20:27:10,260]INFO - spark_rc=0
[2020-11-10 20:27:10,260]INFO - status=success
[2020-11-10 20:27:10,260]INFO - clientName=foo
[2020-11-10 20:27:10,260]INFO - ID=123456
[2020-11-10 20:27:10,260]INFO - dag_ID=dag.py

I want to be able to express all this information from all of our logs in a table. For example:

Dag_ID  Client Name  Status   spark_rc
dag.py  foo          success  0

And more rows reading these fields from our other logs as well; for example, I want to see all these fields for our runs in the last 24 hours. Can someone please help me with how to do this?
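Since each key=value pair arrives on its own log line, one hedged approach is to extract the pair with rex, promote it to a dynamically named field, and roll the fields up per run. The index name and the choice of source as the grouping key (i.e. one log file per run) are assumptions:

```
index=my_index "INFO -"
| rex field=_raw "INFO - (?<key>\w+)=(?<value>\S+)"
| where isnotnull(key)
| eval {key}=value
| stats values(dag_ID) as Dag_ID values(clientName) as "Client Name"
        values(status) as Status values(spark_rc) as spark_rc
        by source
| fields - source
```

If several runs can share a source, group by the extracted ID instead, e.g. with a transaction or by carrying ID onto the other lines via eventstats.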