All Topics


Veeam has a really nice Veeam App for Splunk. It's actually one of the nicer apps, with easy data integration and pre-built dashboards that pretty much work out of the box. However, the Veeam data is really only usable within the Veeam App. If you are in a different app in Splunk and try to query the Veeam data, a lot of fields will be "missing". You can see here that I need to use 3 fields (EventGroup, ActivityType, and severity) to find the specific events I'm looking for, but only 1 of those fields is actually available in the _raw data.

OK... so why are these fields available in the Veeam App but not in any other app in Splunk, especially since they don't actually exist in the raw data? This is due to the "enrichment" the Veeam App is performing, translating things like "instanceId" into something human-readable and informative. For example, instanceId here is "41600", and when you query the Veeam events there is a lookup that references 41600 and returns additional information.

Great, so if this is available in the Veeam App, why don't I just do all my work there rather than trying to make this extra information available outside the Veeam App? The short answer is that I want to be able to work with more than one dataset at a time. The longer answer is that I have a custom "app" where I store all my SOC security detection queries. Splunk also has its Enterprise Security app, which basically does the same thing. What this allows is the creation of correlated searches, such as one search that picks up any "ransomware"-related event regardless of whether it comes from Veeam or antivirus or UEBA, etc. But if the Veeam data isn't usable outside of the Veeam App, you can't incorporate it into your standard SOC process.

What you need to do is make all the enrichment in the Veeam App (props, lookups, transforms, data models, etc.) readable from any app in Splunk, not just from the Veeam App. You can do all of this from the Splunk GUI (you might need to be an admin... not sure... I'm an admin so I can do everything/whatever I want LOL).

Share the data model globally. Share the enrichment ("props" & "transforms") globally. You can see here before and after snips of the "export" config after I modified all the properties: (default.meta) and (local.meta, which overrides default.meta and is created dynamically after the edit).
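A minimal sketch of what that export change looks like in configuration terms, assuming it is applied in the Veeam app's metadata/local.meta (the app directory name is illustrative, and the stanzas below export whole object types; individual objects can be exported instead):

# $SPLUNK_HOME/etc/apps/<veeam_app_dir>/metadata/local.meta
# Export the app's knowledge objects to all apps ("system") so searches
# in other apps (e.g. a custom SOC app or Enterprise Security) can use them.
[props]
export = system

[transforms]
export = system

[lookups]
export = system

[datamodels]
export = system

The GUI route described in the post (setting each object's sharing to Global / "All apps") writes equivalent stanzas into local.meta, which overrides default.meta.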
Hi, for learning purposes, why can't we use a personal mail ID for a trial account? I tried creating one with Gmail but it was denied.
Hi, we have DB Connect connections and inputs created on a Splunk HF. We see that the inputs sometimes have status=FAILED, and below is the error captured in the internal DB Connect logs.

Logs:
/opt/splunk/var/log/splunk/splunk_app_db_connect_job_metrics.log
/opt/splunk/var/log/splunk/splunk_app_db_connect_server.log

Error:
ERROR org.easybatch.core.job.BatchJob - Unable to write records java.io.IOException: There are no Http Event Collectors available at this time.

Can someone help?
Hello, I wish to know the functional difference (if any) between the following:

| tstats count FROM datamodel=Endpoint.Processes where Processes.user=SYSTEM by _time span=1h Processes.dest ...

And

| tstats count FROM datamodel=Endpoint.Processes where Processes.user=SYSTEM by Processes.dest ...
| bin _time span=1h

I understand the function and that "| bin" would always be used for a non tstats search, but within tstats is there any reason to place the "span" within the "by", or is it just cleaner/slightly faster? Thanks in advance!
Hello, I am new to the content pack and started to look at service monitoring degradation for KPIs and entities. I have created services, KPIs, and entities, and I can see that the correlation search is finding notable events when any KPIs or entities have high or critical values. NEAP policies are enabled to create episodes (using the NEAP policy from the content pack), but episodes are not getting created. Can someone help with how to troubleshoot this issue? Thanks.
We have a universal forwarder, and the customer has a CSV file on this machine that they would like to ingest. The customer would like to use it as a lookup, so I wonder whether we should ingest the CSV via the UF or, alternatively, send it via the REST API to be uploaded as a lookup. Does the latter option make sense?
This may be a "dumb" question, but I'll just throw it out there while I try to work it out. The Python for Scientific Computing (PSC) app is HUGE. We have a clustered environment, and the search heads (SHs) receive configuration from our deployer. During initial setup, the maximum bundle size was increased to allow pushing the PSC app from the deployer to the SHs. While it worked, we've noticed that any push after adding the PSC app to the deployer now takes around 2 minutes to complete, regardless of how small the change is, even if no restart is needed. I was hoping there was a way to install the PSC app locally in the search head cluster without going through the deployer. The option to "install from file" is not present in the web UI, presumably since we have a deployer for managing apps. After removing the app from the deployer (and consequently from the SH cluster), I tried to unpack the app into the /opt/splunk/etc/apps folder, but it is then removed/deleted automatically when the cluster restarts, presumably since it is not available on the deployer. So, how should we install and use PSC in a clustered environment? Is the only/correct way to push the giant app from the deployer, or is there another way to distribute the app? All feedback and/or suggestions are welcome.
Hello, after upgrading from Splunk 9.1.0 to 9.4.1, we've noticed a display issue affecting all dashboards that use Link List filters at the top. As shown in the screenshots below, the dashboard panels now appear above the Link List filters, making it difficult or impossible for users to interact with the buttons underneath. Note: converting the Link List to a Radio Button input resolves the issue, but I'm looking for a way to continue using the Link List as it worked in the previous version. Has anyone experienced this or found a workaround? Regards,
Hello. For JSON log splitting, I have a problem with a complex structure. The extraction is done on a forwarder (not a UF), in transforms.conf. For example, given:

{ "var1":132,"var2":"toto","var3":{},"var4":{"A":1,"B":2},"var5":{"C":{"D":5}}}

the expected result is:

"var1":132
"var2":"toto"
"var3":{}
"var4":{"A":1,"B":2}
"var5":{"C":{"D":5}}

Currently I use:

[extract_message]
SOURCE_KEY = field:message
REGEX = "([^"]*)":("[^"}]*"|[^,"]*|\d{1,})
FORMAT = $1::$2
REPEAT_MATCH = true
WRITE_META = true

In an online regex tester the regex works, but in Splunk it does not match.
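A hedged note on the post above: an index-time transform with WRITE_META = true only runs if it is referenced from a TRANSFORMS- class in props.conf on the parsing tier, so the wiring usually looks something like the sketch below (the sourcetype name is illustrative, not from the post):

# props.conf on the heavy forwarder (sourcetype name is hypothetical)
[my_json_sourcetype]
TRANSFORMS-extract_message = extract_message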
Hi everyone, I'm working with the Splunk Add-on for AWS on Splunk Cloud, and I've run into an issue when trying to collect CloudWatch Logs from a cross-account AWS setup.

After digging through the Python code inside the add-on, I discovered that it uses the logGroupName parameter when calling describe_log_streams() via Boto3. However, in cross-account scenarios, AWS requires the use of logGroupIdentifier (with the full ARN of the log group), and you can't use both parameters at the same time. So, even though AWS allows log collection across accounts using logGroupIdentifier, the current implementation in the add-on makes it impossible to use this feature correctly. I was able to identify the exact line of code that causes the issue and verified that simply replacing "logGroupName" with "logGroupIdentifier" solves the problem.

Given that I'm on Splunk Cloud, I have a few questions for those with more experience in similar situations:

1. Is it possible to modify that single line of Python code directly in the official add-on deployed in Splunk Cloud (maybe through the UI or some workaround), or is that completely locked down?
2. I could clone the add-on, patch it, and submit it as a custom app, but would running a custom version of the AWS add-on cause issues with future Splunk Support cases (i.e., would support be denied for data coming from a modified TA)?
3. More broadly, for anyone who's set up Splunk in cross-account AWS environments: what's your recommended approach for collecting CloudWatch Logs in this scenario, given the limitations of the official add-on?

Thanks in advance for any insights.
I added the config for the new metadata field in the inputs.conf file and created a fields.conf file to set the field as indexed=true, but the field is still not showing up on the SH. This is for a cloud environment.

inputs.conf:

[monitor://D:\Splunk\abc\*.csv]
disabled = false
index = index_abc
sourcetype = src_abc
_meta = id::123

fields.conf:

[id]
INDEXED = true
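Two hedged checks that are commonly used for indexed fields like this, keeping the index and field names from the post; note also that fields.conf generally needs to be present on the search head tier (not only next to inputs.conf) for the field to behave correctly at search time.

Group by the indexed field with tstats:

| tstats count where index=index_abc by id

Search on it with the indexed-field syntax:

index=index_abc id::123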
I am trying to loop over a table and perform a subsearch for each item. I can confirm I am generating the first table with correct values. However, the subsearch portion is not returning any results. Can someone help me figure out where my query is wrong? Would be very much appreciated!

index=xyz "someString"
| rex field=msg "DEBUG\s+\|\s+(?<traceid>[a-f0-9-]{36})"
| table traceid
| map search="search index=xyz \"$traceid$\" AND \"REQUEST BODY\" | rex field=msg \"artifact_guid\":\"(?<artifact_guid>[a-f0-9-]{36})\" | rex field=msg \"email_address\":\"(?<email_address>[^\"]+)\" | table traceid, artifact_guid, email_address"
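A hedged sketch of one adjustment that is often needed with map: the outer traceid is not a field in the inner search's results, so the final table inside map can come back empty unless the token is re-added with eval. Everything else below is kept as posted; maxsearches is added only because map defaults to a small limit.

index=xyz "someString"
| rex field=msg "DEBUG\s+\|\s+(?<traceid>[a-f0-9-]{36})"
| table traceid
| map maxsearches=1000 search="search index=xyz \"$traceid$\" AND \"REQUEST BODY\" | rex field=msg \"artifact_guid\":\"(?<artifact_guid>[a-f0-9-]{36})\" | rex field=msg \"email_address\":\"(?<email_address>[^\"]+)\" | eval traceid=\"$traceid$\" | table traceid, artifact_guid, email_address"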
Hi Team, currently in my dashboard I am using two separate queries, one for the data lambda and one for the search lambda, and both are added to the dashboard.

1. I want a combined query that works for both the data and search lambdas together and displays results like this:

GET /data/v1/amaz 1601
GET /search/v1/amaz 159
GET /data/v1/product 3
GET /search/v1/product 186
GET /data/v1/hack 373
GET /data/v1/cb1 1127
GET /search/v1/hack 297

Data lambda query:

index=np source IN ("/aws/lambda/p-api-data-test-*") "gemini:streaming:info:*:*:responseTime"
| eval Entity = requestType . "/data/" . entity
| stats sum(responseTime) as totalResponseTime, avg(responseTime) as avgResponseTime, count as totalTimeBuckets by Entity
| eval avgResponseTime = round(avgResponseTime, 2)
| rename totalResponseTime as "totalResponseTime(ms)", avgResponseTime as "avgResponseTime(ms)", totalTimeBuckets as "totalTimeBuckets"
| table Entity "avgResponseTime(ms)"
| sort - "totalResponseTime(ms)"

Data lambda event:

{
  apiResponseTime: 222
  awsRequestId:
  client: Joshu
  domain: product
  entity: product
  hostname:
  level: 30
  msg: gemini:streaming:info:product:data:responseTime
  pid: 8
  queryParams: { ... }
  requestType: GET
  responseTime: 285
  time: 2025-05-01T21:59:06.588Z
  v: 0
}

Search lambda query:

index=np source="/aws/lambda/p-api-search-test-*" "gemini:streaming:info:*:search:response:time"
| rex field=source "/aws/lambda/pdp-pc-api-search-test-(?<entity>[^/]+)"
| eval Entity = requestType . " search/" . entity
| stats sum(responseTime) as totalResponseTime, avg(responseTime) as avgResponseTime, count as totalTimeBuckets by Entity
| eval avgResponseTime = round(avgResponseTime, 2)
| rename totalResponseTime as "totalResponseTime(ms)", avgResponseTime as "avgResponseTime(ms)", totalTimeBuckets as "totalTimeBuckets"
| table Entity "avgResponseTime(ms)"
| sort - "totalResponseTime(ms)"

Search lambda event:

{
  apiResponseTime: 146
  client: Joshua.Be
  domain: product
  entity: amaz
  level: 30
  msg: gemini:streaming:info:amaz:search:response:time
  pid: 8
  queryHits: 50
  queryParams: { ... }
  requestType: GET
  responseTime: 149056
  time: 2025-05-01T22:01:35.622Z
  v: 0
}

2. The data API msg will be gemini:streaming:info:product:data:responseTime and the search API msg will be gemini:streaming:info:amaz:search:responseTime, so in the query I added the keyword "gemini:streaming:info:*:*:responseTime", but it throws the warning: "The term '"gemini:streaming:info:*:*:responseTime"' contains a wildcard in the middle of a word or string. This might cause inconsistent results if the characters that the wildcard represents include punctuation."
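A hedged sketch of one way the two searches might be combined, using only the field names visible in the posted events. Classifying data vs. search from msg, and the "/v1/" path segment in the Entity label, are assumptions made to match the desired output rather than anything confirmed in the post.

index=np source IN ("/aws/lambda/p-api-data-test-*", "/aws/lambda/p-api-search-test-*") msg="gemini:streaming:info:*"
| eval api = case(match(msg, ":data:"), "data", match(msg, ":search:"), "search")
| where isnotnull(api)
| eval Entity = requestType . " /" . api . "/v1/" . entity
| stats sum(responseTime) as totalResponseTime, avg(responseTime) as avgResponseTime, count as totalTimeBuckets by Entity
| eval avgResponseTime = round(avgResponseTime, 2)
| table Entity totalTimeBuckets avgResponseTime totalResponseTime
| sort - totalResponseTime

Filtering on msg="gemini:streaming:info:*" with only a trailing wildcard also avoids the mid-string wildcard warning mentioned in point 2.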
We have logs coming to HEC as nested JSON in chunks, and we are trying to break them down into individual events at the HEC level before indexing them in Splunk. I had some success removing the header/footer with props.conf and breaking the events, but it doesn't work completely: most of the logs are not broken into individual events.

Sample event:

{ "logs": [
{ "type": "https", "timestamp": "2025-03-17T23:55:54.626915Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "Root=XXXX" },
{ "type": "https", "timestamp": "2025-03-17T23:56:00.285547Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "Root=XXXX" },
{ "type": "https", "timestamp": "2025-03-17T23:57:39.574741Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "XXXX" }
] }

I am trying to get each record as its own event:

{ "type": "https", "timestamp": "2025-03-17T23:55:54.626915Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "Root=XXXX" }

{ "type": "https", "timestamp": "2025-03-17T23:56:00.285547Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "Root=XXXX" }

{ "type": "https", "timestamp": "2025-03-17T23:57:39.574741Z", "elb": "someELB", "client_ip": "10.xx.xx.xx", "client_port": 123456, "target_ip": "10.xx.xx.xx", "target_port": 123456, "request_processing_time": 0, "target_processing_time": 0.003, "response_processing_time": 0, "elb_status_code": 200, "target_status_code": 200, "received_bytes": 69, "sent_bytes": 3222, "request": "GET https://xyz.com", "user_agent": "-", "ssl_cipher": "ECDHE-RSA-AE", "ssl_protocol": "TLSv1.2", "target_group_arn": "arn:aws:elasticloadbalancing:us-west-2:XXXXX:targetgroup/XXXXX", "trace_id": "XXXX" }

props.conf:

[source::http:lblogs]
SHOULD_LINEMERGE = false
SEDCMD-remove_prefix = s/^\{\s*\"logs\"\:\s+\[//g
SEDCMD-remove_suffix = s/\]\}$//g
LINE_BREAKER = \}(,\s+)\{
NO_BINARY_CHECK = true
TIME_PREFIX = \"timestamp\":\s+\"
pulldown_type = true
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N
TRUNCATE = 1000000

The current results in Splunk are shown in the attached screenshot. The header ({ "logs": [) and footer are removed from the events, but the split (line break) seems to work for only one event in each chunk, and the others are ignored.
I have data like this:

id    time               Contacts
x1    4/22/2011 10:00    676689
x1    4/23/2011 11:00

I want it as shown below, where lw_message_time is the time from the row where the Contacts column is null, and the other column is the time from the row where Contacts has a value:

id    lw_message_time    standardised message
x1    4/23/2011 10:00    4/23/2011 11:00
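A hedged sketch of one way to get that shape in SPL, assuming fields named id, time, and Contacts as in the table above; the output column names follow the post, and which time lands in which column depends on the actual data.

| eval lw_message_time = if(isnull(Contacts) OR Contacts="", time, null())
| eval standardised_message_time = if(isnotnull(Contacts) AND Contacts!="", time, null())
| stats values(lw_message_time) as lw_message_time, values(standardised_message_time) as "standardised message" by id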
I'm trying to replace the default SSL certs on the deployment server with third-party certs, but I'm confused about what it entails. I don't know much about TLS certs, mostly just the basics. I'm following these documents for deployment server-to-client communication:

https://docs.splunk.com/Documentation/Splunk/latest/Security/StepstoSecuringSplunkwithTLS
https://docs.splunk.com/Documentation/Splunk/latest/Security/ConfigTLScertsS2S

If I make the changes on the deployment server to point to the third-party cert, do I also need to change the cert on the UFs to keep communicating on port 8089?
I'm continuing to work on dashboards to report on user activity in our application. I've been going through the knowledge base, bootcamp slides, and Google, trying to determine the best route to report on the values in log files such as this one. The dashboards I am creating show activity in the various modules: what values are getting selected and what is being pulled up. I looked at spath and mvexpand and wasn't getting the results I was hoping for; it might be that I wasn't formatting the search correctly, and also how green my workplace and I are with Splunk. Creating field extractions has worked for the most part to pull the specific values I wanted to report, but further on I'm finding incorrect values being pulled in. Below is one such event; it has been sanitized and is in valid JSON format. I'm trying to build a table showing userName, date and time, serverHost, SparklingTypeId, PageSize, and PageNumber; the other values not so much. Are spath and mvexpand along with eval statements the best course? I was using field extractions in a couple of other modules but then found incorrect values were being added.

{"auditResultSets":null,"schema":"com","storedProcedureName":"SpongeGetBySearchCriteria","commandText":"com.SpongeGetBySearchCriteria","Locking":null,"commandType":4,"parameters":[{"name":"@RETURN_VALUE","value":0},{"name":"@SpongeTypeId","value":null},{"name":"@CustomerNameStartWith","value":null},{"name":"@IsAssigned","value":null},{"name":"@IsAssignedToIdIsNULL","value":false},{"name":"@SpongeStatusIdsCSV","value":",1,"},{"name":"@RequestingValueId","value":null},{"name":"@RequestingStaffId","value":null},{"name":"@IsParamOther","value":false},{"name":"@AssignedToId","value":null},{"name":"@MALLLocationId","value":8279},{"name":"@AssignedDateFrom","value":null},{"name":"@AssignedDateTo","value":null},{"name":"@RequestDateFrom","value":null},{"name":"@RequestDateTo","value":null},{"name":"@DueDateFrom","value":null},{"name":"@DueDateTo","value":null},{"name":"@ExcludeCustomerFlagTypeIdsCSV","value":",1,"},{"name":"@PageSize","value":25},{"name":"@PageNumber","value":1},{"name":"@SortColumnName","value":"RequestDate"},{"name":"@SortDirection","value":"DESC"},{"name":"@HasAnySparkling","value":null},{"name":"@SparklingTypeId","value":null},{"name":"@SparklingSubTypeId","value":null},{"name":"@SparklingStatusId","value":null},{"name":"@SparklingDateFrom","value":null},{"name":"@SparklingDateTo","value":null},{"name":"@SupervisorId","value":null},{"name":"@Debug","value":null}],"serverIPAddress":"255.255.000.000","serverHost":"WEBSERVER","clientIPAddress":"255.255.255.255","sourceSystem":"WebSite","module":"Vendor.Product.BLL.Community","accessDate":"2025-04-30T15:34:33.3568918-06:00","userId":3231,"userName":"PeterVenkman","traceInformation":[{"type":"Page","class":"Vendor.Product.Web.UI.Website.Community.Operations.SpongeSearch","method":"Page_Load"},{"type":"Manager","class":"Vendor.Product.BLL.Community.SpongeManager","method":"SpongeSearch"}]}
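For the post above, a hedged sketch of one spath-based approach, using only field names visible in the sample event; the index and sourcetype in the first line are placeholders, and it assumes the JSON is the whole _raw event so spath can parse it.

index=your_app_index sourcetype=your_json_sourcetype
| spath userName
| spath serverHost
| spath accessDate
| spath path=parameters{} output=param
| mvexpand param
| spath input=param
| search name IN ("@SparklingTypeId", "@PageSize", "@PageNumber")
| eval name=replace(name, "^@", "")
| eval {name}=value
| stats values(SparklingTypeId) as SparklingTypeId, values(PageSize) as PageSize, values(PageNumber) as PageNumber by userName, accessDate, serverHost

The mvexpand step turns each entry of the parameters array into its own row, which avoids relying on positional alignment between the name and value multivalue fields when some values are null.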
Hello everyone, I'm Piyush and I'm new to the Splunk environment. I was getting along with MLTK and Python for Scientific Computing to develop something for the ongoing Splunk hackathon, but although I have tried several times to install it, it still shows me an XML screen saying the file size is too big. I even deleted and re-downloaded the Python for Scientific Computing file and uploaded it again, yet the issue still persists, while other add-ons like MLTK installed just fine. I'm on Windows and I don't have a clue how to move forward from here, as I am learning about the Splunk environment on the go.
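A hedged note on the post above: if the XML error is the web upload size limit, two things sometimes help on a local, standalone install (both assume on-prem rather than Splunk Cloud, and the value and path below are only examples). One is raising max_content_length in web.conf and restarting Splunk; the other is installing the app from the downloaded package with the CLI instead of the browser upload.

# %SPLUNK_HOME%\etc\system\local\web.conf
[settings]
max_content_length = 2147483648

From the command line (path is illustrative):

splunk install app "C:\path\to\python-for-scientific-computing.tgz"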
I added a 30 GB renewal license key that is valid from June 14th later this year. Afterward I got a message telling me to restart Splunk. I did that, and now all other licenses are missing from the License admin console. Has anyone experienced this before? Is there a way to recover the old licenses? Running Splunk Enterprise 9.2.0.1 on-prem on Red Hat.
I am using the request-snapshots API call. I would like to know which node the snapshot came from. The response does not seem to contain that data directly, but "callChain" seems close. I've figured out that the Component number in the call chain corresponds to a tier, and I know how to look up the mapping. There is also a "Th:nnnn" in the call chain, but I don't know what it is. A thread? What can I do with that? I know this info exists because it's in the UI. Thanks.