Hi, I have this table:

customer | city
A | NY
B | NY
A | LA

and I want to replace the value in `customer` with B only when `customer == A` and `city == LA`. I tried this solution, but it resulted in deleting all entries besides the one I replaced:

```
eval customer=if(match(customer, "A"), "LA", customer)
```
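One likely fix: the `if()` above only tests `customer`, so combining both conditions restricts the replacement to the intended rows. A minimal sketch using the field names from the question:

```
... | eval customer=if(customer=="A" AND city=="LA", "B", customer)
```

Note that `eval` on its own never removes events; if rows disappeared, the culprit is more likely a `where` or `search` elsewhere in the pipeline.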
I am looking to create a report to show just a subset of my universal forwarders. What I am looking for is an expansion of this that I just cannot seem to get working. Any assistance appreciated!

```
| tstats values(sourcetype) AS Sourcetype dc(sourcetype) AS #sourcetypes WHERE index=* by host
```

...for just the following indexes: os, main, tomcat. A great help would be to also sort by deployment app (NIX, Unix, Linux) if possible, but I am not seeing anything in the system that shows the source of the data (which app is deployed).
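To restrict the search to those three indexes, replacing the wildcard with explicit index terms should work. A sketch based on the query above (`#sourcetypes` is renamed to `sourcetype_count` here, since a leading `#` is awkward to reference in later commands):

```
| tstats values(sourcetype) AS Sourcetype dc(sourcetype) AS sourcetype_count
    WHERE (index=os OR index=main OR index=tomcat) by host
| sort - sourcetype_count
```

The deployment app is not part of the indexed metadata, so it would have to be joined in separately, for example from a lookup mapping host to server class.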
Hi Splunkers, I need to uninstall an existing UF and install a new UF (sounds weird, but I have such a case) without allowing it to re-index past data. Is this possible? The new UF needs to run as if it were the same instance rather than a fresh install; care of the conf files can be taken.
Hi, I am trying to configure CloudTrail via S3 in the Splunk Add-on for AWS, but after configuring it, certificate validation fails. I have already added the local certificate to cacert.pem in the following locations:

D:\Program Files\Splunk\etc\apps\Splunk_TA_aws\bin\3rdparty\python3\certifi
D:\Program Files\Splunk\etc\apps\Splunk_TA_aws\bin\3rdparty\python3\botocore
D:\Program Files\Splunk\Python-3.7\Lib\site-packages\certifi
D:\Program Files\Splunk\Python-3.7\Lib\site-packages\botocore
D:\Program Files\Splunk\Python-3.7\Lib\site-packages\botocore\vendored\requests

But as of now no luck; I am still getting the same error:

```
File "D:\Program Files\Splunk\Python-3.7\lib\http\client.py", line 966, in send
  self.connect()
File "D:\Program Files\Splunk\etc\apps\Splunk_TA_aws\bin\3rdparty\python3\boto\https_connection.py", line 131, in connect
  ca_certs=self.ca_certs)
File "D:\Program Files\Splunk\Python-3.7\lib\ssl.py", line 1238, in wrap_socket
  suppress_ragged_eofs=suppress_ragged_eofs
File "D:\Program Files\Splunk\Python-3.7\lib\ssl.py", line 423, in wrap_socket
  session=session
File "D:\Program Files\Splunk\Python-3.7\lib\ssl.py", line 870, in _create
  self.do_handshake()
File "D:\Program Files\Splunk\Python-3.7\lib\ssl.py", line 1139, in do_handshake
  self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)
```

Regards, Saugata D
Hi, I'm very new to splunklib, not very experienced in programming, and breaking my brain on this. I have two scripts. The first one creates a list of assets from the server with API request calls and saves it to a file. The second one is run by a custom command; it calls the first one and then uses a generating streaming command to pass the results from the file to Splunk. That works. Now, I want to pass the server IP as an argument along with my custom command, instead of having it statically specified in the API call script. I've tried many ways and nothing works for me: when trying to use `Option`, the second script does not see the argument, and when trying to call the API script as a module and add the argument in the custom command script it's also a no-go. I could not find any examples and am losing motivation, thinking my design is bad.

```
import csv
import subprocess
import sys

from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration

# execfile('apiscript.py')
subprocess.call('apiscript.py')


@Configuration()
class results(GeneratingCommand):
    """----- Generating command yields results into splunk ------"""

    def generate(self):
        file = '/data/splunk/apps/bin/lookups/assets.csv'
        with open(file, "r") as f:
            reader = csv.reader(f, delimiter=',')
            for tenant, asset in reader:
                yield {'P_tenants': tenant, 'CIDR_Range': asset}


dispatch(results, sys.argv, sys.stdin, sys.stdout, __name__)
```

Any help will be useful. Thanks
Hello Team,

Greetings for the day. In my organization we use Splunk for all types of monitoring, and I am stuck on one issue which is breaking the Splunk agent's ability to report accurate data. A little background on what's happening: we use SCCM for imaging Windows Server OS, and the Splunk agent is part of the task sequence, so it gets installed along with the OS and other applications. Up to here everything works. Now the actual issue:

1. We build the server and hand it over to the respective team. The team is not happy with the server name we used, so they raise a request to change it.
2. Once the Windows engineer changes the hostname of the server, Splunk stops reporting any data to the dashboard.
3. Based on troubleshooting, we found that the Splunk agent has an inputs configuration file at "C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf" which does not change its value when the server is renamed; it keeps whatever was recorded at the time of agent installation. Once we change the "host = <server name>" value in inputs.conf (replacing the old name with the new one) and restart the Splunk services, it starts reporting accurate data again.

For now I have written a PowerShell one-liner which reads the value from that inputs.conf and matches it against the hostname; if the values mismatch, it rewrites the file according to the server name (script given at the end). I was wondering if Splunk can do this on my behalf instead of my doing it in code? Thank you for your time in reading this question.

(Get-Content "C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf" -Raw) -replace '( = ).*', " = $1$($env:COMPUTERNAME)" | Set-Content "C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf"
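If memory serves from the inputs.conf spec, the forwarder can be told to re-evaluate its host name on every start, which would make the rename self-healing without any script. A sketch (verify against the inputs.conf documentation for your UF version):

```
# C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf
[default]
# re-resolve the machine's hostname each time splunkd starts,
# instead of freezing the value captured at install time
host = $decideOnStartup
```

After changing this, restart the SplunkForwarder service once; subsequent renames should then be picked up automatically on the next service start.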
Is there an SPL query to get the ad-hoc or saved search (with user info) which consumed the most memory and CPU over the past 1 hour?
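One common approach is the per-search resource usage data in the _introspection index. A sketch (field names follow the splunk_resource_usage sourcetype; adjust to what your deployment actually logs):

```
index=_introspection sourcetype=splunk_resource_usage data.search_props.sid=* earliest=-1h
| stats max(data.mem_used) AS mem_used max(data.pct_cpu) AS pct_cpu
    BY data.search_props.sid data.search_props.user data.search_props.type
| sort - mem_used
| head 10
```

`data.search_props.type` distinguishes ad hoc from scheduled searches, and the sid can be joined against index=_audit to recover the full search string.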
Hello,

Using the o365:management:activity logs, I'm trying to create a search where I:

1. Get a list of users, and their IP addresses, who failed MFA.
2. Get a list of successful-login IP addresses we have seen before for the users identified in Step 1, and compare the results.
3. Return the list of users and their IP addresses which were not in Step 2.

The idea is to return the IP addresses that failed MFA which never had a successful login before. What's the best way to develop this search? I initially thought about doing a search where the sub-search would be the following (return a list of users and their IPs who failed MFA):

```
index=o365 sourcetype=o365:management:activity Workload=AzureActiveDirectory Operation=UserLoginFailed (LogonError=UserStrongAuthClientAuthNRequiredInterrupt OR LogonError=DeviceAuthenticationRequired OR LogonError=DeviceAuthenticationFailed) earliest=-2h@h latest=now
| stats count by UserId ClientIP
```

and then the main search would grab the successful-login IP addresses for the users and compare the results: if an IP address that failed MFA was not in the list of successful-login IPs, return that IP and user. I wasn't able to get that to work. Is this possible? Is this the best approach? This appeared to work, but the performance is painful:

```
index=o365 sourcetype=o365:management:activity Workload=AzureActiveDirectory Operation=UserLoginFailed (LogonError=UserStrongAuthClientAuthNRequiredInterrupt OR LogonError=DeviceAuthenticationRequired OR LogonError=DeviceAuthenticationFailed) earliest=-1h@h latest=now NOT [ search index=o365 sourcetype=o365:management:activity Workload=AzureActiveDirectory Operation=UserLoggedIn earliest=-7d latest=-4h@h | dedup ClientIP UserId | table ClientIP UserId]
| stats count by ClientIP UserId Operation
```

Thanks in advance.
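Subsearches have result and runtime limits, which often explains both the failures and the slowness. An alternative is a single pass over both event types, grouping by user/IP pair; this is only a sketch built from the field names in the question:

```
index=o365 sourcetype=o365:management:activity Workload=AzureActiveDirectory
    (Operation=UserLoggedIn OR (Operation=UserLoginFailed
        (LogonError=UserStrongAuthClientAuthNRequiredInterrupt OR LogonError=DeviceAuthenticationRequired OR LogonError=DeviceAuthenticationFailed)))
    earliest=-7d
| stats values(Operation) AS operations BY UserId ClientIP
| where mvcount(operations)=1 AND operations="UserLoginFailed"
```

Pairs that only ever produced failures survive the final `where`; any pair with a prior UserLoggedIn is dropped. The recent-failure window can be tightened afterwards by also tracking max(_time) of the failed logins.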
Greetings,

In https://docs.splunk.com/Documentation/Splunk/8.0.6/Admin/Serverclassconf , the description of serverclass.conf details this:

```
[global]
repositoryLocation = <path>
* The repository of applications on the server machine.
* Can be overridden at the serverClass level.
```

It then goes on to describe the serverClass level:

```
[serverClass:<serverClassName>]
* A server class can override all inheritable properties in the [global] stanza.
repositoryLocation = <path>
```

Taking these together, it seems a deployment client can only ever receive its apps from one repositoryLocation at a time. Is this correct? If so, bummer. If not, is there a way to set up a deployment server with two repos, say one for inputs and the other for outputs, and have the client configured like so:

```
[deployment-client]
clientName = apps_from_output_repo.apps_from_inputs_repo
```

In my head each repo would be its own app on the deployment server, and each would have its own serverclass.conf, but one app would handle the [global] level, the other the [serverClass] level.
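Since repositoryLocation is inheritable and, per the spec quoted above, overridable at the serverClass level, one deployment server can serve apps from two repositories by defining two server classes that both match the client, rather than by splitting serverclass.conf across apps. A sketch (class names, app names, and paths are all invented):

```
# serverclass.conf on the deployment server
[global]
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps

[serverClass:outputs_repo]
repositoryLocation = /opt/splunk/etc/deployment-apps-outputs
whitelist.0 = *

[serverClass:outputs_repo:app:my_outputs_app]

[serverClass:inputs_repo]
repositoryLocation = /opt/splunk/etc/deployment-apps-inputs
whitelist.0 = *

[serverClass:inputs_repo:app:my_inputs_app]
```

A client matching both classes receives apps from both locations; the single-repo limit applies per server class, not per client.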
Hi Team,

I need assistance with merging two reports and their queries, producing a new query/report having all the content of both reports.

IPAMv4 Device Networks:

```
source=ib:discovery:switch_port_capacity index=ib_discovery
| fillnull value="N/A"
| dedup network_view device_ip_address interface_ip_address
| join type=inner InterfaceSubnet, network_view
    [search sourcetype=ib:ipam:network index=ib_ipam
    | dedup NETWORK, view
    | rename NETWORK as InterfaceSubnet view as network_view
    | fields InterfaceSubnet, network_view, allocation]
| append
    [search sourcetype=ib:ipam:network index=ib_ipam
    | dedup NETWORK, view
    | rename NETWORK as InterfaceSubnet view as network_view
    | join type=left InterfaceSubnet, network_view
        [search source=ib:discovery:switch_port_capacity index=ib_discovery
        | fields InterfaceSubnet, device_ip_address, network_view]
    | where isnull(device_ip_address)]
| rename InterfaceSubnet as "IPAM Network" allocation as "Utilization %" device_ip_address as "Device IP" interface_ip_address as "Interface IP" device_model as "Device Model" device_vendor as "Device Vendor" device_version as "Device OS Version" device_name as "Device Name" network_view as "Network View"
| table "IPAM Network", "Utilization %", "Network View", "Device IP", "Device Name", "Interface IP", "Device Model", "Device Vendor", "Device OS Version"
```

IP Address Inventory:

```
source=ib:ipam:ip_address_inventory index=ib_ipam
| sort 0 -_time, +ip(ip_address)
| fillnull value=""
| dedup network_view ip_address
| eval last_discovered_timestamp=strftime(last_discovered_timestamp,"%Y-%m-%d %H:%M:%S")
| eval first_discovered_timestamp=strftime(first_discovered_timestamp,"%Y-%m-%d %H:%M:%S")
| rename network_view as "Network View" ip_address as "IP Address" discovered_name as "Discovered Name" port_vlan_name as "Vlan Name" port_vlan_number as "Vlan ID" vrf_name as "VRF Name" vrf_description as "VRF Description" vrf_rd as "VRF RD" bgp_as as "BGP AS" first_discovered_timestamp as "First Seen" last_discovered_timestamp as "Last Seen" managed as "Managed" management_platform as "Management Platform"
| table "IP Address" "Discovered Name" "First Seen" "Last Seen" "Network View" "Managed" "Management Platform" "Vlan Name" "Vlan ID" "VRF Name" "VRF Description" "VRF RD" "BGP AS"
```
We have a set of UFs in a private network that is totally isolated from the deployment server. For forwarder-to-indexer traffic we will use intermediate forwarders; however, we would also like to utilize the deployment server. Is it possible to configure a UF to point to a deployment server through a proxy?
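Recent forwarder versions document proxy settings for splunkd's outbound connections in server.conf; whether they cover deployment-client traffic depends on the version, so treat this purely as a pointer to verify against the server.conf spec for your release (host and port here are invented):

```
# server.conf on the universal forwarder
[proxyConfig]
http_proxy = http://proxy.example.com:3128
https_proxy = http://proxy.example.com:3128
```

If the version in use predates proxy support, the usual alternative is running a dedicated deployment server inside the isolated network.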
Hey, I have a Splunk instance digesting nmap results. Each host that is found on the network generates an event that has information like IP and MAC addresses. How can I formulate a search that would show me MAC addresses that were discovered for the first time in the last day or so? I tried something like this:

```
NOT ([search earliest=-30d latest=-1d | table mac]) | table mac ip_address hostname
```

But that didn't actually remove any hosts that had been seen before.
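Instead of a subsearch (which silently truncates at its result limit), the first-seen time per MAC can be computed in one pass; a sketch using the field names from the question:

```
earliest=-30d
| stats min(_time) AS first_seen latest(ip_address) AS ip_address latest(hostname) AS hostname BY mac
| where first_seen >= relative_time(now(), "-1d@d")
| convert ctime(first_seen)
```

Any MAC whose earliest sighting in the 30-day window falls inside the last day is treated as new; widen the lookback if 30 days is not enough history.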
I have a CSV lookup that has a column with numerical data (specifically integers). When I do the lookup, Splunk is treating it as string data. For example, if I use this column to sort the numbers 1,2,3,10,11, it sorts as 1,10,11,2,3. Any help is appreciated.
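Lookup output fields come back as strings, but `sort` can be told to compare numerically; a sketch with hypothetical lookup and field names:

```
... | lookup my_lookup host OUTPUT item_count
| sort 0 num(item_count)
```

Alternatively, `| eval item_count=tonumber(item_count)` before sorting makes every subsequent command treat the field as a number as well.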
Hi Everyone, I have one requirement. Below are my logs:

```
2020-09-30T05:03:29.304446Z app_name=api environment=e1c.a.b.controller.FileController : RID:22abe6c4-6eaf-4d47-8c4a-79b2594ea612-of1-team_g  A_EL_1100:   EVENT RECEIVED FROM SOURCE
2020-09-30T05:00:17.765656Z app_name=api environment=e1c.a.b.controller.FileController : RID:b0d5b62f-080f-4292-a2d1-4991123eecce-of1-team_f  A_EL:ARC_1100:  EVENT RECEIVED FROM SOURCE
```

In the above logs I have a field RID (RequestID), which is not extracted:

RID:22abe6c4-6eaf-4d47-8c4a-79b2594ea612-of1-team_g
RID:b0d5b62f-080f-4292-a2d1-4991123eecce-of1-team_f

The source is appended at the end of the RID. For the first RID the source is of1-team_g and the RequestId is 22abe6c4-6eaf-4d47-8c4a-79b2594ea612; for the second, the source is of1-team_f and the RequestId is b0d5b62f-080f-4292-a2d1-4991123eecce. There are multiple sources and RequestIds. I want to display the number of RequestIds received from a particular source, but only for events matching the pattern "EVENT RECEIVED FROM SOURCE". Nothing is extracted as of now. Can someone guide me with a search query for the number of RequestIds received per source for that particular pattern?
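A rex extraction can split the UUID from the trailing source token; a sketch (the index name is a placeholder):

```
index=main "EVENT RECEIVED FROM SOURCE"
| rex "RID:(?<request_id>[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})-(?<rid_source>\S+)"
| stats dc(request_id) AS request_count BY rid_source
```

The fixed-width hex pattern matches the UUID exactly, so the remaining `of1-team_g`-style suffix lands in rid_source; use count instead of dc if repeated RequestIds should be counted per occurrence.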
I have some saved Search Macros which I'd like to delete.  How is this done?
I'm new to Splunk and would like to know if it's possible to retrieve and monitor hardware status. When I search the data I have, I can find logs when a threshold has been passed, like "temperature high". Is it possible to monitor temperature continuously, or CPU usage, memory, disk, etc.?
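CPU, memory, and disk can be collected continuously with the scripted inputs that ship with the Splunk Add-on for Unix and Linux; temperature usually needs an extra source such as IPMI or SNMP, since the OS add-on does not read hardware sensors. A sketch of enabling two of the add-on's stock inputs (intervals are arbitrary):

```
# inputs.conf in the local/ directory of the Splunk Add-on for Unix and Linux
[script://./bin/cpu.sh]
interval = 30
sourcetype = cpu
disabled = 0

[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
disabled = 0
```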
Hello Splunkers, I need to reuse universal forwarders which were earlier owned by other teams. We are looking to take this decision to avoid unnecessary installation and deinstallation. As I understand it, I need to change the deployment server first. But since we have a different Splunk Enterprise license, the forwarder license within the UF would be different. How can we take ownership of the Splunk UF by license as well? Is there any way to change it? Or will it be more effort to do that than installing and deinstalling? Or shall we upgrade it; will that work? Are there any other trade-offs we need to review? (OK with the fishbucket one.)
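Repointing a forwarder to a new deployment server is a small configuration change followed by a restart; a sketch (the hostname is invented):

```
# etc/system/local/deploymentclient.conf on the universal forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = new-deployment-server.example.com:8089
```

As far as the license goes, universal forwarders run on a built-in forwarder license rather than an Enterprise license, so license ownership effectively follows the indexers the data is sent to (set in outputs.conf), not the UF itself.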
Hi Folks, I want to find all sources and sourcetypes for enabled notables in Splunk ES. Please advise. Regards, D
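Notables are produced by correlation searches, so listing the enabled ones and inspecting their search strings is one way in; a sketch against the saved-searches REST endpoint (the attribute name follows the convention ES uses for correlation searches):

```
| rest /servicesNS/-/-/saved/searches
| search action.correlationsearch.enabled=1 disabled=0
| table title search
```

The index and sourcetype terms each rule relies on can then be read (or rex-extracted) from the returned search field.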
Is there any query to get the list of all indexes in a specific index cluster?
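Two common approaches, run from a search head attached to that cluster; both are sketches:

```
| eventcount summarize=false index=* index=_*
| dedup index
| table index
```

or, to include empty indexes:

```
| rest /services/data/indexes
| dedup title
| table title
```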
Hello, I am trying to use Splunk's REST API to change portions of existing correlation searches created within Enterprise Security. For this test, I created a correlation search called chris_test with a description of "Test correlation search". I would like to modify its description to be "AAA". I try to do this as follows:

```
curl -k -u chris https://essplunk.company.com:8089/servicesNS/chris/SplunkEnterpriseSecuritySuite/saved/searches/Threat%20-%20chris_test%20-%20Rule -d description="EEE" > chris_test.txt
```

I also tried with:

```
-X POST -d description="EEE"
```

In both cases, it doesn't seem to make the update to the correlation search. Can someone help me to better understand what I am doing wrong? Long-term, I'd like to be able to use the REST API to update the Next Steps of a notable adaptive response via something like:

```
-d action.notable.param.next_steps="DEMO"
```