All Topics

Hello, is the Splunk Supporting Add-on for Active Directory version 3.0.5 compatible with Python 3? Thanks, Krithika
Hello, I am performing a search in Splunk Cloud but I am getting the following error. Does anyone know how to resolve it?

The search:

index=sf-test-metrics domains NOT "sitefinity.com" NOT "*iriscodebasesandbox.cloudapp.net*" NOT "*.bedford*" NOT "*.regression.com*" NOT "dynamicscan.cloudapp.net" NOT "irisdmsitsandbox" NOT "*irisdmuatsandbox*" NOT "*.sandbox.sitefinity.com*" NOT "localhost" domains{}=*\.*
| spath path=modulesInfo{} output=x
| fields x, clientId, domains{}
| mvexpand x
| spath input=x
| search moduleName="AdminBridge"
| where extensionBundlesCount > 0
| dedup clientId
| table domains{}, extensionBundlesCount
| sort -extensionBundlesCount

The error:

command.mvexpand: output will be truncated at 37800 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached.

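mvexpand duplicates each event once per value of x, so memory cost scales with events times values. A minimal sketch of two common mitigations, assuming only AdminBridge rows matter downstream: drop events that cannot match before expanding, and only if that is not enough, raise the ceiling named in the error (on Splunk Cloud, limits.conf changes go through Support):

... | fields x, clientId, domains{}
| search x="*AdminBridge*"
| mvexpand x
| spath input=x
| search moduleName="AdminBridge"
| ...

limits.conf, on the search head:

[mvexpand]
max_mem_usage_mb = 1024
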
I need to extract fields from a log which is in XML format. Below is an example:

<Event>
<DateTime>2022-11-10T11:58:41.136Z</DateTime>
<IBIC8>CTBAAUSN</IBIC8>
<InstanceId>D</InstanceId>
<EventCode>PAG.NTF.CRL_UPDATE_SUCCESS</EventCode>
<Name>CRL update succeeded</Name>
<Severity>INFO</Severity>
<Class>SECURITY</Class>
<Text><![CDATA[CRL was successfully downloaded and validated Context: - URL: https://crlcheck.common.sipn.swift.com:443/SWIFTCA1.crl - Version: 2 - Updated on: Thu Nov 10 21:57:53 AEDT 2022 - Valid till: Sun Nov 13 21:57:53 AEDT 2022 - Issuer: o=swift]]></Text>
</Event>

I need to extract fields like EventCode, Severity, and Text so I can use them as statistical data. Can this be done with a regular expression, or is there another way to extract them? Please suggest.

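For well-formed XML events, spath can usually extract these without any regex. A minimal sketch, assuming the raw event is the <Event> document shown above:

... your base search ...
| spath output=event_code path=Event.EventCode
| spath output=severity path=Event.Severity
| spath output=text path=Event.Text
| stats count by event_code, severity

Setting KV_MODE = xml in props.conf for the sourcetype achieves the same automatically at search time, if you would rather not repeat the spath calls.
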
Hi All, we are running Splunk UF version 8.2.4 on our Linux x64 client machines and have planned to upgrade them to the latest version 9.0.2, so I downloaded the latest rpm package. We usually deploy the package with rpm on all our client servers, but when we tried to deploy this one we got the errors below. Is there anything that needs to be done on our end before upgrading the Splunk UF from 8.x to 9.x on Windows and Linux servers?

"/opt/splunkforwarder/etc/auth/ca.pem": already a renewed Splunk certificate: skipping renewal
"/opt/splunkforwarder/etc/auth/cacert.pem": already a renewed Splunk certificate: skipping renewal
Failed to start mongod. Did not get EOF from mongod after 5 second(s).
[App Key Value Store migration] Starting migrate-kvstore.
Created version file path=/opt/splunkforwarder/var/run/splunk/kvstore_upgrade/versionFile36
Created version file path=/opt/splunkforwarder/var/run/splunk/kvstore_upgrade/versionFile40
[App Key Value Store migration] Collection data is not available.
Starting KV Store storage engine upgrade: Phase 1 (dump) of 2:
Failed to migrate to storage engine wiredTiger, reason=Failed to receive response from kvstore error=, service not ready after waiting for timeout=300271ms
[App Key Value Store migration] Starting migrate-kvstore.
Created version file path=/opt/splunkforwarder/var/run/splunk/kvstore_upgrade/versionFile42
[App Key Value Store migration] Collection data is not available.
[DFS] Performing migration.
[DFS] Finished migration.
[Peer-apps] Performing migration.
[Peer-apps] Finished migration.
Creating unit file...
Current splunk is running as non root, which cannot operate systemd unit files. Please create it manually by 'sudo splunk enable boot-start' later.
Failed to create the unit file. Please do it manually later.
Systemd unit file installed by user at /etc/systemd/system/SplunkForwarder.service.
Configured as systemd managed service.

Nov 09 07:05:49 splunk[135425]: Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
Nov 09 07:05:49 splunk[135425]: Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
Nov 09 07:05:49 splunk[135425]: Checking conf files for problems...
Nov 09 07:05:49 splunk[135425]: Done
Nov 09 07:05:49 splunk[135425]: Checking default conf files for edits...
Nov 09 07:05:49 splunk[135425]: Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.0.2-17e00c557dc1-linux-2.6-x86_64-manifest'
Nov 09 07:05:49 splunk[135425]: PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Nov 09 07:05:49 splunk[135425]: 2022-11-09 07:05:49.925 -0600 splunkd started (build 17e00c557dc1) pid=135425

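For reference, a minimal sketch of the usual non-root rpm upgrade sequence (the service name, rpm filename, and splunk account are assumptions; adjust to your environment):

sudo systemctl stop SplunkForwarder
sudo rpm -Uvh splunkforwarder-9.0.2-17e00c557dc1-linux-2.6-x86_64.rpm
sudo chown -R splunk:splunk /opt/splunkforwarder
sudo -u splunk /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes

Note that the last line of your output shows splunkd did start (build 17e00c557dc1, pid=135425), so the KV store and unit-file messages may be migration warnings rather than a failed install.
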
Hi Team, I am a beginner with AppDynamics. Could you please provide links to AppDynamics video tutorials, from beginner to advanced? I need video tutorials to learn and explore AppDynamics. Thanks, Balaraju G
How do I join a search with a list of job names from a file, DepC_listofjobs.csv? This file has only one column, which holds unique job names. Below is my search; the ```| join ...``` line is what I would uncomment:

earliest=-8h index=log-13120-prod-c laas_appId="pbmp.prediction*" "Prediction"
```| join [ inputlookup DepC_listofjobs.csv ]```
| bin _time span=1h
| stats dc(predictionId), dc(jobName), count by _time predictionStatus

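An alternative to join here is to let the lookup filter the base search through a subsearch. A minimal sketch, assuming the CSV's column header is jobName and matches the event field of the same name:

earliest=-8h index=log-13120-prod-c laas_appId="pbmp.prediction*" "Prediction"
    [ | inputlookup DepC_listofjobs.csv | fields jobName ]
| bin _time span=1h
| stats dc(predictionId), dc(jobName), count by _time, predictionStatus

The subsearch expands to jobName="..." OR jobName="..." and keeps only events whose jobName appears in the file.
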
I have a search with timechart. If I use the column chart visualization, I only see the Mondays as dates on the x-axis. Is it possible to show a label for every day, per column?

index=test host=test* | my search
| eval date=strptime(date,"%A%m/%d/%Y")
| timechart span=1d count

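With a time-based x-axis, the column chart thins labels automatically. One way to force a label on every column is to turn _time into a string category after the timechart. A minimal sketch, assuming the daily counts from your search:

index=test host=test* | my search
| timechart span=1d count
| eval day=strftime(_time, "%Y-%m-%d")
| fields - _time, _span*
| table day, count

Because day is a plain string, the chart treats each value as its own category and labels all of them.
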
Hi, I wanted to see if there is any way to store credentials in Phantom so that they are not visible to users within Phantom, but can still be fetched by a custom function within our playbook to make API calls that need those credentials. Note: the built-in Phantom apps such as HTTP do not meet our requirements, so we are using the Python requests module to make the API calls from within a custom function. Thanks!
Hello community! For a project I'm working on, I need to split a single multivalue event into multiple single-value events. Let me try to represent it for clarity:

As is:
Event: A B C null D E F G H I L

Desired:
Event1: A B C null D
Event2: A B E F G
Event3: A B H I L

Is it possible to achieve this? I played around with mvexpand and mvzip but I wasn't able to reach my goal. Thanks in advance for your kind support.

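The mvzip/mvexpand pattern should work once the values that belong together live in parallel multivalue fields. A minimal sketch, assuming A and B sit in single-value fields and the remaining values are spread across three aligned multivalue fields named mv1, mv2, and mv3 (all hypothetical names):

| eval zipped=mvzip(mvzip(mv1, mv2), mv3)
| mvexpand zipped
| eval parts=split(zipped, ",")
| eval mv1=mvindex(parts, 0), mv2=mvindex(parts, 1), mv3=mvindex(parts, 2)
| fields - zipped, parts

One caveat: mvzip uses "," as its default delimiter and silently misaligns when one of the fields has a missing entry, which may be why the C/null/D group was troublesome.
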
Good afternoon. I have already raised a similar topic; last time the situation was explained to me, but the problem has not been resolved. I send messages to /services/collector/raw, and Splunk takes the value of my "eventTime" field ("1985-04-12T23:21:15Z") and substitutes it as _time, which breaks my normal searches and the normal operation of alerts. The sourcetype in my case is assigned by Splunk itself when it validates the token. I tried adjusting the sourcetype, but that didn't help. In the screenshot, I displayed what I tried to do. Unfortunately, we do not have the ability to always send messages to the event endpoint.

Message example:

curl --location --request POST 'http://10.10.10.10:8088/services/collector/raw' \
--header 'Authorization: Splunk a2-b5-48-9ff7-95r' \
--header 'Content-Type: text/plain' \
--data-raw '{ "messageType": "RABIS", "eventTime": "1985-04-12T23:21:15Z", "messageId": "ED280816-E404-444A-A2D9-FFD2D171F447" }'

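The raw endpoint runs full timestamp extraction, so Splunk finds the ISO date inside the payload. One way to stop that is to pin _time to the ingest time for this sourcetype. A minimal props.conf sketch on the instance hosting the HEC input, assuming your token's sourcetype is my_raw_hec (a hypothetical name):

[my_raw_hec]
DATETIME_CONFIG = CURRENT

DATETIME_CONFIG = CURRENT tells the parser to use the current indexing time instead of hunting for a timestamp in the event.
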
Hello. For starters, I'm an amateur at regex queries, so I use Field Extraction, but it's very clunky, cannot extract all the fields I want, and sometimes produces wrong extractions. The event that I want to extract from is:

2022-11-09 17:36:05 BANK_CITAD_ID="79303001", SOTIEN_CONLAI="150000000", UPDATED_DATE="2022-11-09 17:36:05.0", FILE_NAME="GTCG_dinhky_20221109.xlsx", STATUS="DATA_ERROR", ERROR_MSG="NOT_ALLOW_LIMIT", SOTIENTANG="0", SOTIENGIAM="0", LOAI_FILE="DK", STT="2", BANK_CODE="STB", ID="6829", NOTE="DC1:2286754104070,1,10/11/2022 00:00:00;DC2:10000000000,1501000000,10/11/2022 00:00:00,1000001;DEL: 1501000001,10/11/2022 00:00:00;DEL: 1501000001,10/11/2022 00:00:00;"

There are a total of 8 fields that I want to extract; the field names can be field1, field2, and so on, at least that much I can handle. Thank you in advance.

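Because the event is already in KEY="value" form, the generic kv extractor may be enough, with a per-field rex as the fallback. A minimal sketch of both, assuming the event text above:

| extract pairdelim="," kvdelim="="

or, per field:

| rex "STATUS=\"(?<STATUS>[^\"]*)\""
| rex "ERROR_MSG=\"(?<ERROR_MSG>[^\"]*)\""

The rex variant names the field after the key, so repeat the line for each of the 8 fields you need; since the values are quoted, [^\"]* safely stops at the closing quote even when the value (like NOTE) contains commas.
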
Hello community, I have a query returning results with an IP address field (src_ip). I used to add a line to match a single IP range:

| where cidrmatch("Range IP", src_ip)

Now I have many other IP ranges to add. Instead of adding many lines, I created a CSV lookup with all these ranges:

range_ip | comment
-----------------------
10.0.0.0/8 | range1
11.0.0.0/8 | range2
12.0.0.0/8 | range3

Do you have any idea how I can filter my results with the cidrmatch function based on the range_ip column of my CSV lookup? Something like:

| where cidrmatch( range_ip IN lookup.csv, src_ip)

Thanks.

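A lookup definition can do the CIDR matching for you, so no cidrmatch call is needed in the search. A minimal sketch, assuming a lookup definition named ip_ranges over your CSV (the name is hypothetical). In transforms.conf, or under the advanced options of the "Lookup definitions" UI:

[ip_ranges]
filename = lookup.csv
match_type = CIDR(range_ip)

Then in the search:

| lookup ip_ranges range_ip AS src_ip OUTPUT comment
| where isnotnull(comment)

Each src_ip is matched against the CIDR ranges in the range_ip column; rows that hit no range return a null comment, which the where clause drops.
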
Hi, I have a network background, know a little syslog, and can use the Splunk interface, but this is the first time I'm installing a new Splunk instance. I've started my Splunk Cloud trial and spun up SC4S via podman (version 2.39.0) on RHEL 8.2. It seems to be running fine, and SC4S startup messages are arriving in Splunk Cloud:

- - syslog-ng 149 - [meta sequenceId="1"]syslog-ng starting up; version='3.36.1'
host = splunk-sc4s  source = sc4s  sourcetype = sc4s:events

I'm sending syslog from my test ASA to SC4S, and with a tcpdump I can see it coming in:

07:46:13.658608 IP xxx.45.78.xxx.syslog > splunk-sc4s.internal.cloudapp.net.syslog: SYSLOG local7.debug, length: 101
07:46:13.735897 IP xxx.45.78.xxx.syslog > splunk-sc4s.internal.cloudapp.net.syslog: SYSLOG local7.info, length: 183
07:46:13.962147 IP xxx.45.78.xxx.syslog > splunk-sc4s.internal.cloudapp.net.syslog: SYSLOG local7.info, length: 155
07:46:16.550565 IP xxx.45.78.xxx.syslog > splunk-sc4s.internal.cloudapp.net.syslog: SYSLOG local7.info, length: 166

The problem is that in Splunk I'm only getting HTTP errors, which seem to refer to the syslog messages SC4S is trying to forward:

- syslog-ng 149 - [meta sequenceId="18"]curl: error sending HTTP request; url='https://prd-p-xxxxxx.splunkcloud.com:8088/services/collector/event', error='Timeout was reached', worker_index='2', driver='d_hec_fmt#0', location='root generator dest_hec:5:5'
host = splunk-sc4s  source = sc4s  sourcetype = sc4s:events

Things I've configured on SC4S, only the basics:

/etc/sysctl.conf
net.core.rmem_default = 17039360
net.core.rmem_max = 17039360
net.ipv4.ip_forward = 1

/opt/sc4s/env_file
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://prd-p-xxxxx.splunkcloud.com:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-9c3c-4918-8eb3-xxxxxxxxxxxxx
#Uncomment the following line if using untrusted SSL certificates
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no

In Splunk I've created the HEC token, and I've created the indexes. Looking at this, everything seems to fall into the "main" index instead of "lastchangeindex", which doesn't look right, does it? I must be missing something obvious here, because this should be straightforward, right? I'd appreciate any input on this, thanks!

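"Timeout was reached" usually means the SC4S host cannot reach the HEC endpoint at all, so it's worth testing the URL and token outside SC4S first. A minimal sketch from the SC4S host, reusing the values from env_file:

curl -k https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event \
  -H "Authorization: Splunk xxxxxxxx-9c3c-4918-8eb3-xxxxxxxxxxxxx" \
  -d '{"event": "hec connectivity test"}'

A {"text":"Success","code":0} response points the investigation back at SC4S; a hang or connection error points at firewall/egress rules between the host and the Splunk Cloud HEC port.
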
Good morning, I'm looking for the specification of the S2S protocol. We have some trouble getting Splunk UF data through a Palo Alto firewall. The firewall has an application profile engine, so it does not just look at layer 4 (IP/port/protocol) but also at certain aspects of the packet, such as headers. There is no predefined profile for Splunk S2S, so we need to create one, but I can't find the docs for the protocol definition. We are aware that it is possible to bypass this app engine and do layer-4 filtering, but it would be the only application in the company to do that, and for an internet-facing security service that seems off. 🫣 We were advised to use S2S rather than HEC for stability reasons; if it does not work out we will have to switch, but for now we want to try to make it work.
So I created two new indexes on the cluster master and successfully deployed them to the indexer cluster; validation succeeded and 'show status' was successful too. But after restarting the server, it will not start and the GUI is unreachable. I checked the status and splunkd wasn't running. On the indexers' GUI I had a message saying:

Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json manager=172.31.80.210:8089 rv=0 gotConnectionError=1 gotUnexpectedStatusCode=0 actual_response_code=502 expected_response_code=2xx status_line="Error connecting: Connection refused" socket_error="Connection refused" remote_error= [ event=addPeer status=retrying AddPeerRequest: { active_bundle_id=049FC0942259F88577FB49C2989F9432 add_type=Initial-Add base_generation_id=0 batch_serialno=1 batch_size=4 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 guid=82EFC827-B30A-48D9-ADFA-A2EEC27DB3CA last_complete_generation_id=0 latest_bundle_id=049FC0942259F88577FB49C2989F9432 mgmt_port=8089 register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name=indexer_01 site=default splunk_version=8.2.4 splunkd_build_number=87e2dda940d1 status=Up } Batch 1/4 ]. 11/10/2022, 6:14:24 AM.

I've been at it for over 
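"Connection refused" on 172.31.80.210:8089 means splunkd on the cluster master itself is not listening, so that host is the place to start. A minimal sketch of the first checks, assuming a default /opt/splunk install on the master:

/opt/splunk/bin/splunk status
tail -100 /opt/splunk/var/log/splunk/splunkd.log
/opt/splunk/bin/splunk start

Once the master's splunkd is up and port 8089 answers, the peers should retry the addPeer registration on their own (the log already shows status=retrying).
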
In Search & Reporting I get this error and no data from my search: "Could not load lookup=LOOKUP-dropdowns". How do I fix this?
"Context":"{"user id":"jane.doe.sen", "Expense Date":"11/10/2022", How to use extract this rex command?      to come up result like  user id Expense Date jane.doe.sen 9/6/2022 ... See more...
"Context":"{"user id":"jane.doe.sen", "Expense Date":"11/10/2022", How to use extract this rex command?      to come up result like  user id Expense Date jane.doe.sen 9/6/2022 please help                                                                                              
Hi guys, on Phantom 4.10.7 I tried to delete containers older than 6 months via delete_containers.pyc. It confirmed the counts of affected containers, artifacts, and run records as expected, but after confirming the deletion and waiting a few seconds for the command to finish, I can still see the containers in the UI. If I rerun the delete_containers command with the same parameters, it says there is nothing to delete. Does anyone have any idea what is going on? I need to housekeep the environment due to the surge in disk usage, and there is no better way IMO than this one. Any suggestions are highly appreciated.
I know I can use the "rest" command, as in the link below, to get the list of saved searches:
https://community.splunk.com/t5/Getting-Data-In/Is-there-any-way-to-list-all-the-saved-searches-in-Splunk/m-p/267333
Since the "rest" command cannot be used in Splunk Cloud, I would like an SPL search that can list them without using that command. It seems the "rest" command can be enabled if I contact Cloud Support, but I want to avoid using it as much as possible! Best regards.

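If the goal is an inventory rather than something embedded in SPL, the same data is available from the REST API outside of search. A minimal sketch, assuming management-port (8089) access has been opened for your stack (on Splunk Cloud this normally also requires a support request):

curl -k -u <user>:<password> \
  "https://<your-stack>.splunkcloud.com:8089/services/saved/searches?output_mode=json&count=0"

As far as I know there is no documented pure-SPL alternative, which is why the linked thread leans on | rest.
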
Hello community, I hope you are doing well. I have a customer that wants a version control system for Splunk, and the option I see is Git, to handle the whole backup and restore process. I tried this app: https://splunkbase.splunk.com/app/4355#/overview but it doesn't work as I expected, because it cannot do the commit automatically and only saves the knowledge objects to another location. I would like to know if there is an app that handles the commit to the remote repository, or whether this must be done with a script; I would like the customer to avoid doing it manually. Thanks for the input. Best regards.
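If no app fits, a small scheduled script is a workable fallback. A minimal cron-able sketch, assuming $SPLUNK_HOME/etc has been initialized as a git working copy with a remote named origin (all assumptions):

#!/bin/sh
# snapshot Splunk configuration and push it to the remote repository
cd /opt/splunk/etc || exit 1
git add -A
git commit -m "automated config snapshot $(date -u +%Y-%m-%dT%H:%M:%SZ)" || exit 0  # nothing to commit
git push origin main

Run it from cron (or a scripted input) at whatever cadence the customer wants for their version history.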