All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Ninjas! I need help setting up an alert that triggers a PHP script with the search results. The script should pass the results to a third-party system. For example: script.php "date | field1 | field2 | _raw"
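One possible approach, sketched here with the hypothetical stanza name my_php_alert: Splunk's legacy scripted alert action does not pass results as a formatted string — it invokes the script with positional arguments, the eighth of which is the path to a gzipped CSV of the results, which the PHP script can then read and forward.

```
# savedsearches.conf -- legacy scripted alert action (a sketch; stanza and
# search are placeholders). The script must live in $SPLUNK_HOME/bin/scripts,
# and it receives the results file path (a .csv.gz) as its 8th argument.
[my_php_alert]
search = index=main error
action.script = 1
action.script.filename = script.php
```

Inside script.php, $argv[8] would then point at the results file to parse and push to the third-party system.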
The following error occurred in Salesforce.com Setup. How can I resolve it?

Encountered the following error while trying to update: Error while posting to url=/servicesNS/nobody/splunk-app-sfdc/sfdc_setup/sfdc_account/sfdc_account
Hi, I am trying to monitor bandwidth on computers (running Windows and Linux) in a network and send the data to the Splunk server via the Splunk Universal Forwarder. I need some guidance. Thanks.
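On the Windows side, one hedged sketch is a perfmon input on the Universal Forwarder collecting Network Interface counters (the index name here is a placeholder you would create first):

```
# inputs.conf on a Windows UF -- a sketch of collecting per-NIC throughput
# via Performance Monitor counters every 60 seconds
[perfmon://NetworkBandwidth]
object = Network Interface
counters = Bytes Received/sec; Bytes Sent/sec
instances = *
interval = 60
index = network_metrics
```

On Linux, the Splunk Add-on for Unix and Linux provides comparable interface-statistics scripted inputs rather than perfmon.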
Following a super helpful thread here: https://answers.splunk.com/answers/129424/how-to-compare-fields-over-multiple-sourcetypes-without-join-append-or-use-of-subsearches.html

But I've run into an issue: when I start to use stats, I always drop one of my indexes. Is it possible to use stats and still maintain both indexes, or at least merge the data prior to losing one of them?

(index=Index1 sourcetype=Type1) OR (index=Index2)
| fields field1 field2 mac_address dest_mac
| eval mac_address=replace(mac_address,"\W","")
| eval mac_address=lower(mac_address)
| rex field=dest_nt_host "(?[^.]+)."
| eval dest_nt_host=lower(dest_nt_host)
| eval dest_mac=lower(dest_mac)
| stats values(index) as index values(field1) as field1 values(field2) as field2 values(mac_address) as mac_address by dest_mac
| table dest_mac mac_address field1 field2 index

Essentially, whichever field I state in the BY clause determines the index that's kept, but ideally I'd like to match on dest_mac and mac_address while pulling field1 from Index1 and field2 from Index2. Without the BY clause, my data is essentially appending without append, looking like:

field1 dest_mac index1
field2 mac_address index2

Thanks in advance!
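One hedged sketch of keeping both indexes in a single stats — assuming dest_mac (from Index1) and mac_address (from Index2) refer to the same physical MAC — is to coalesce the two fields into one normalized join key before the stats, so events from both indexes land in the same group:

```
(index=Index1 sourcetype=Type1) OR (index=Index2)
| eval join_mac=lower(replace(coalesce(dest_mac, mac_address), "\W", ""))
| stats values(index) as indexes values(field1) as field1 values(field2) as field2 by join_mac
| where mvcount(indexes) > 1
```

The final where keeps only MACs seen in both indexes; drop it if you also want the unmatched rows.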
The first query I run is:

index=sec_proxy_web sourcetype="bluecoat:proxysg:access:syslog" | top 10 url

I have a web proxy log with a url field, but the url values contain tcp, http, and some ad-blocker sites that I want to filter out by listing those sites. For example: 1. I need to remove the site edge-chat.facebook.com; 2. I need to strip tcp, http, or https if present in the url. I tried a few rex expressions but was not able to do it. Can you please help me get an output without these characters?
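A hedged sketch combining both steps — this assumes the unwanted prefixes appear as a scheme like tcp:// or https:// at the start of the url value:

```
index=sec_proxy_web sourcetype="bluecoat:proxysg:access:syslog"
    NOT url="*edge-chat.facebook.com*"
| eval url=replace(url, "^(?:tcp|https?)://", "")
| top 10 url
```

Additional sites to exclude can be added as further NOT url="*...*" terms, or kept in a lookup and excluded with a subsearch if the list grows.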
Hi All, I'm trying to get the IMAP Mailbox app working on our Phantom instance (4.6.19142) and we keep seeing errors. Our config is 'outlook.office365.com:993' as the server/hostname, and "use SSL" is ticked.

Testing Connectivity:
App 'IMAP' started successfully (id: 1582158372604) on asset: 'imap-splunk-alerts' (id: 14)
Loaded action execution configuration
Error connecting to server. [Errno -2] Name or service not known.
Connectivity test failed.
No action executions found.

We have no firewall rules in place that might stop the connection.

nc -zvw 1 outlook.office365.com 993
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 40.100.151.226:993.
Ncat: 0 bytes sent, 0 bytes received in 0.07 seconds.

What is weird is that we aren't seeing any traffic at all on port 993 or port 143 (when specified) when we run a tcpdump. This should be the simplest thing on earth to get working, but so far a solution has eluded us. Any advice would be gratefully received. Thanks.
Hi, this issue is about the machine agent installation. This is what is happening: I already installed the machine agent and got the "Started AppDynamics Machine Agent Successfully" message; however, it looks like it is not reporting. I checked Agents on the controller and it is listed there, so I tried some troubleshooting by checking the machine agent status with the ./appdynamics-machine-agent status command, and I got "appdynamics-machine-agent is stopped".

As a second step, I tried to restart it and then stop it again with the ./appdynamics-machine-agent stop command, and this is the output: "Stopping appdynamics-machine-agent: [FAILED]"

Any idea what is happening and how to fix it? I appreciate it. Thank you. Best Regards.
Hi, can anyone let me know how to migrate EUM from on-prem to SaaS?
I have two sets of questions on which I am looking for input.

1. I have data from multiple tables for an application, onboarded using DB Connect (MSSQL). I have to map the login data in the tables to the Authentication datamodel. To achieve this, I need data from two separate tables (sources) to be joined, which will give me valid login information along with the other fields required for the Authentication datamodel. My question is: how do I implement CIM for multi-source data?

2. I would also like to understand how to implement CIM compliance where I have to join two separate indexes. One way I thought of was to use a KV store lookup for one index, make it an automatic lookup for the second index, and use those fields — but this would make the lookup file too huge. Another way is to have a saved search run regularly to pull data from one index and use the collect command to place it in the second index. This brings me back to my first question: how do I implement CIM for two sources in the same index?
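One common CIM pattern worth noting: the datamodel does not require a single sourcetype — it picks up anything carrying the right tag, and an eventtype can be defined over a search spanning multiple sources. A hedged sketch, with app_db, mssql:logins, and mssql:users as hypothetical index/sourcetype names:

```
# eventtypes.conf -- the eventtype search may span multiple sources/sourcetypes
[app_authentication]
search = index=app_db (sourcetype=mssql:logins OR sourcetype=mssql:users)

# tags.conf -- tag the eventtype so the Authentication datamodel's
# constraint (tag=authentication) picks these events up
[eventtype=app_authentication]
authentication = enabled
```

The remaining work is field-level: aliases or calculated fields so each source exposes the CIM names (user, src, action, etc.) the datamodel expects.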
Below is my search output for the SPL I am running:

db_1          server_2
oracle_test   n87896
db2_bio       bg8768
oracle_890    j987653
n88888        n88888
n7777         n7777

How do I exclude the records that are identical between the two fields — in this case, n88888 and n7777? I tried using a where clause / search, but without any success. SPL used to display the records that are not identical:

| splunk command | where db_1 != server_2 (not working)
| splunk command | fields db_1, server_2 | search db_1 != server_2 (not working)

Any clue/help will be appreciated!
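The where db_1 != server_2 approach is the right idea; when it matches nothing, a common cause is hidden whitespace or case differences between the two values. A hedged sketch that normalizes both fields before comparing:

```
... | eval db_1=lower(trim(db_1)), server_2=lower(trim(server_2))
| where db_1 != server_2
```

If either field can be null, note that != evaluates to false against null, so add a coalesce or isnotnull() guard if those rows should also be kept.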
I have a bunch of sourcetypes that are supposed to contain only valid JSON data. I've been asked to verify that they do in fact contain only JSON. Is there an easy/elegant way to search for records that are not well-formed JSON (where "well-formed" means records that Splunk can automatically format as a JSON tree)?
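On Splunk 8.1 or later, the json_valid() eval function makes this direct — a hedged sketch, with the index/sourcetype names as placeholders:

```
index=my_index sourcetype=my_json_sourcetype
| where NOT json_valid(_raw)
```

On older versions, a rougher alternative is probing with spath for a key every well-formed event is known to contain, and keeping the events where the extraction returns null.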
Hello, I need to write a query that finds, from a list of hosts, which ones are still not integrated or not sending data to the Splunk platform. I already have a lookup with the total universe of hosts that should be on the platform. Any help will be appreciated. Thanks!
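A common pattern for this is to start from the lookup and subtract the hosts Splunk has actually seen — a hedged sketch, assuming the lookup is named expected_hosts.csv and has a host column:

```
| inputlookup expected_hosts.csv
| search NOT [| tstats count where index=* by host | fields host ]
```

The subsearch returns every host seen in any index over the selected time range; what survives the NOT is the missing hosts. Be aware of subsearch result limits (typically 10,000 rows) in very large environments — a lookup-based outer join with fillnull is an alternative at scale.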
I am facing a challenging issue with the Cisco WSA Add-on version 3.3.0: I cannot use rename/EXTRACT/FIELDALIAS/COALESCE to get the src_ip field into src for one location. For reference, I am receiving WSA logs from two different sources/locations, both with the same configuration/OS version. If the WSA data comes from location A, the src field works fine with a FIELDALIAS; but if the data comes from location B, the src field never appears, even if I apply a FIELDALIAS or any other action. The same add-on applies to both locations. Any idea will be highly appreciated.

Current Splunk version: 7.3.4
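Since field aliases are scoped to a props.conf stanza, a first check is whether location B's events actually land under the same sourcetype (and whether src_ip itself is extracted there — an alias on a missing field silently produces nothing). A hedged sketch of a host-scoped fallback, assuming location B's hosts match a pattern like wsa-b-*:

```
# props.conf -- hypothetical host-scoped alias for location B's senders
[host::wsa-b-*]
FIELDALIAS-wsa_src = src_ip AS src
```

Comparing the two locations with something like "... | stats count(src_ip) count(src) by host, sourcetype" can confirm where the extraction chain breaks.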
For the following syslog message (ASA-6-302015), Splunk parses it as follows:

%ASA-6-302015: Built outbound UDP connection 425358360 for outside:123.45.67.89/22094 (123.45.67.89/22094) to servers:172.16.8.136/27316 (98.76.54.32/27316)

src_ip = 172.16.8.136
dest_ip = 123.45.67.89

However, for syslog message ASA-6-302016, Splunk parses it in the reverse order:

%ASA-6-302016: Teardown UDP connection 425358360 for outside:123.45.67.89/22094 to servers:172.16.8.136/27316 duration 0:02:31 bytes 540020

src_ip = 123.45.67.89
dest_ip = 172.16.8.136

Note that these are still the same connection, identified by the timestamps, ports, and connection ID (425358360). Does anyone know how this can be fixed? It could produce misleading data.

Version info:
Splunk Cloud (7.2.9)
Splunk Add-on for Cisco ASA (3.4.0)

Here is a screenshot of the issue. Alternate link if they will let you see it: https://i.imgur.com/5LZaNk1.png
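Until the add-on's extractions are corrected, one hedged search-time workaround is to swap the pair for the teardown messages only — this sketch assumes the add-on exposes the message ID in a field named message_id (adjust to whatever the extraction is actually called):

```
sourcetype=cisco:asa (message_id=302015 OR message_id=302016)
| eval tmp=src_ip
| eval src_ip=if(message_id=302016, dest_ip, src_ip),
       dest_ip=if(message_id=302016, tmp, dest_ip)
| fields - tmp
```

A more permanent fix would be overriding the add-on's rex-based extraction for 302016 in a local props.conf, so the swap happens everywhere the fields are used.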
I have to upgrade Splunk Enterprise (from 7.2.6 to 8.0.1) and Enterprise Security (from 5.3.0 to 6.0.0). I am following this documentation: https://docs.splunk.com/Documentation/Splunk/8.0.1/Installation/HowtoupgradeSplunk But it does not cover the upgrade order: do I have to upgrade Splunk Enterprise first, or Enterprise Security first?
This is a continuation of: https://answers.splunk.com/answers/804476/compare-the-actual-start-time-to-the-expect-start.html

I have created a dashboard that compares the Actual Start Time with the Expected Start Time of a given job. In this dashboard, I would like these highlight conditions to be in effect:

job ran on-time (Act. <= Exp.) = highlight green
job has not run yet = highlight yellow
job ran late (Act. > Exp.) = highlight red

I would like each row (each job name) highlighted based on these conditions. Here is the code for my current dashboard:

<dashboard>
  <label>Name</label>
  <row>
    <panel>
      <title>Title</title>
      <table>
        <title>Title</title>
        <search>
          <query>msg.jobName = RLMMTP* | spath "msg.status.Code" | search "msg.status.Code"=* | spath "msg.recordType" | search "msg.Type"=* | spath "msg.message" | search "msg.message"="RECORD PROCESSED" | eval day = strftime(_time, "%d") | stats earliest(timestamp) as startTime, latest(timestamp) as endTime count by msg.jobName | eval startTime=substr(startTime,1,13) | eval ActualStart=strftime(startTime/1000, "%H:%M:%S") | lookup AverageStartTimes.csv msg.jobName as msg.jobName OUTPUT ExpectedStart | table msg.jobName</query>
          <earliest>-1d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="msg.jobName">
          <colorPalette type="expression">
            if ( ActualStart >= ExpectedStart, "#65A637")
            if ( ActualStart < ExpectedStart, "#8B0000")
          </colorPalette>
        </format>
      </table>
    </panel>
  </row>
</dashboard>

Upon trying with just Simple XML in the dashboard, it seems I cannot create a condition to highlight one row at a time, only the whole column. Unfortunately, using JS and CSS is currently unavailable to me.
Any help is appreciated.
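One limitation worth knowing: a Simple XML colorPalette expression can only reference the cell's own value, not other columns, and Simple XML cannot color an entire row without JS/CSS. A hedged workaround is to compute a status column in SPL (Status is a hypothetical field name here) and attach a map-type palette to it:

```
... | eval Status=case(isnull(ActualStart), "pending",
                       ActualStart <= ExpectedStart, "ontime",
                       true(), "late")
| table msg.jobName ActualStart ExpectedStart Status
```

```
<format type="color" field="Status">
  <colorPalette type="map">
    {"ontime": #65A637, "pending": #F8BE34, "late": #D93F3C}
  </colorPalette>
</format>
```

This colors only the Status cell, but repeating the format element for each column (each keyed off the same Status logic rewritten as an expression on its own values, or off duplicated status columns) approximates row highlighting within Simple XML's limits.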
Hello All, I was wondering if there is a way to clean up the key-value pair logging inside of snmptrapd. I am ingesting these logs with a UF and I do not want to perform rex/sed from my indexers. Thanks.

Here is my current format string (vi /etc/snmp/snmptrapd.conf):

format2 Date = %y-%02.2m-%02.2l %02.2h:%02.2j:%02.2k\n%V\n%v\n---\n

My logs look like this:

CISCO-LWAPP-DOT11-CLIENT-MIB::cldcApMacAddress.'....6C' = mac-address
CISCO-LWAPP-DOT11-CLIENT-MIB::cldcClientByIpAddressType.0 = ipv4
CISCO-LWAPP-DOT11-CLIENT-MIB::cldcClientUsername.'@&....' = name
CISCO-LWAPP-DOT11-CLIENT-MIB::cldcClientSSID.'@&....' = Employee
CISCO-LWAPP-DOT11-CLIENT-MIB::cldcClientSessionID.'@&....' = id
CISCO-LWAPP-DOT11-CLIENT-MIB::cldcApMacAddress.'@&....' = mac

I would like them to look like this (before ingesting them into Splunk):

cldcApMacAddress = mac-address
cldcClientByIpAddressType = ipv4

If that isn't possible, I would at least like to remove the random characters (example: "@&...." and "'....6C'"). I am not sure why they are generated.
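If cleaning the traps before snmptrapd writes them proves awkward, a hedged alternative is parsing-time SEDCMD cleanup in props.conf — with the caveat that SEDCMD runs where events are parsed (a heavy forwarder or the indexers), not on a Universal Forwarder, so this only helps if a parsing tier is acceptable. The sourcetype name and regexes here are a rough sketch against the sample lines above:

```
# props.conf -- strip the MIB module prefix and the quoted instance index
[snmptrapd]
SEDCMD-strip_mib_prefix = s/[A-Z0-9-]+:://g
SEDCMD-strip_instance   = s/(\w+)\.('[^']*'|[0-9]+)/\1/g
```

As for the "random characters": the quoted strings like '....6C' are the trap's OID instance index (often a MAC address or session ID) rendered as text by snmptrapd, which is why they vary per event.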
Hi, I have a setup where a packet broker is sending multiple data streams to a Universal Forwarder. I need to understand: if the traffic from a particular source is tagged somehow (replaying a pcap file through the packet broker), can I use inputs.conf with that tagged 'field' (which will hopefully show a difference) to send it to a specific index, or do I need to use props / transforms / outputs? Thanks in advance, Damindra
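In general, inputs.conf can only assign an index per input stanza (per port/file), not based on event content; content-based routing needs props/transforms on a parsing tier (heavy forwarder or indexer), since a Universal Forwarder does not inspect event contents. A hedged sketch, where the sourcetype, tag regex, and index name are all placeholders for your environment:

```
# props.conf (on a heavy forwarder or indexer)
[broker_stream]
TRANSFORMS-route_replay = route_pcap_replay

# transforms.conf -- REGEX matches whatever marker the replayed
# traffic carries in the raw event
[route_pcap_replay]
REGEX = pcap-replay-tag
DEST_KEY = _MetaData:Index
FORMAT = pcap_index
```

If the streams instead arrive on distinct ports, separate [udp://...] or [tcp://...] stanzas in inputs.conf with different index settings would be the simpler option.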
Hi Everyone, I am configuring the ES SH on the DMC under Distributed search » Search peers, but it is failing with "replication status = failed". I checked the connectivity from the DMC host to the ES SH, which looks good. This is the error in the _internal logs:

02-19-2020 12:13:38.522 -0500 WARN DistributedPeerManager - Unable to distribute to peer named at uri https://searchPeer_ES_SH:8089 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_HTTP_REPLY_ERROR_CODE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.

Only the ES SH (standalone) cannot be added to the DMC; I am able to add indexers and other management instances. Please suggest how to resolve this. Thanks in advance.
We are having issues with Splunk Enterprise Security version 6: we get errors in Incident Review related to SA-ThreatIntelligence encryption, and we are also unable to load templates when clicking on Content Management.

Splunk version: 8.0
Splunk ES app: 6.1
OS: CentOS 8.0