All Topics

I have an HTTP Event Collector input collecting JSON data via a syslog forwarder. The syslog-ng message template looks like:

    body("{ \"source\": \"${.splunk.source}\", \"event\": ${MSG} }")

I can see the message and the proper source on my indexer, but timestamp extraction is the problem. Because there is often a delay between the log and the time syslog receives it, I want to use a field in the message as the timestamp. A message looks like this:

    {"data":"stuff","time_stamp":"2022-05-10 17:14:23Z","value1":"more_stuff"}

So I created a props.conf on my indexer cluster that looks like:

    [my_sourcetype]
    DATETIME_CONFIG =
    MAX_TIMESTAMP_LOOKAHEAD = 30
    TIME_PREFIX = time_stamp\":\"
    TIME_FORMAT = %Y-%m-%d %h:%M:%S%Z
    TZ = UTC

I've confirmed the sourcetype is correct: I define it in the HEC inputs and they match. But for the life of me, I can't get Splunk to find the time. I've tried looking for errors with this search:

    index=_internal log_level=WARN OR log_level=ERROR "timestamp"

(found in an older community post, so thanks to the author), but I find nothing. I also tried the Add Data UI and created a new source type there; in that UI my props.conf works, but for some reason on the cluster it doesn't. Are there any other troubleshooting steps I can follow? Am I missing something that might help this work better?
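One detail worth double-checking in the TIME_FORMAT above: in strptime-style formats, %H is the 24-hour directive, while %h (where supported at all) means an abbreviated month name, and %Z matching of a bare "Z" varies by platform. A quick sanity check of a corrected format against the sample timestamp, sketched in Python on the assumption that Splunk's TIME_FORMAT follows strptime semantics:

```python
from datetime import datetime, timezone

# Sample timestamp taken from the event payload above.
sample = "2022-05-10 17:14:23Z"

# %H is hours 00-23. The trailing "Z" is matched as a literal here,
# since %Z handling of a bare "Z" differs across strptime implementations;
# with TZ = UTC already set in props.conf, a literal Z is the safer choice.
parsed = datetime.strptime(sample, "%Y-%m-%d %H:%M:%SZ").replace(tzinfo=timezone.utc)
print(parsed.isoformat())  # 2022-05-10T17:14:23+00:00
```

If the corrected format parses cleanly here but Splunk still misses it, the next thing to verify is that the props.conf stanza is deployed to the tier actually receiving the HEC traffic.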
Hi,

I accidentally moved the Splunk installation folder into another folder, and Splunk stopped working. I tried to copy it back, but the copy failed partway and left a partial copy, so I upgraded to the latest version of Splunk and tried to start it.

When I start Splunk, I get the following error:

    Checking prerequisites...
    Checking http port [8000]: open
    Checking mgmt port [8089]: open
    Checking appserver port [127.0.0.1:8065]: open
    Checking kvstore port [8191]: open
    Checking configuration... Done.
    Text decryption - error in finalizing: No errors in queue
    AES-GCM Decryption failed!
    Decryption operation failed: AES-GCM Decryption failed!

It also prompts for a passphrase:

    All preliminary checks passed.
    Starting splunk server daemon (splunkd)...
    Enter PEM pass phrase:

Even if I don't enter anything, the Splunk server starts running, but the GUI is not available and gives me a 500 internal server error. I'm not sure which configuration file was changed during the process.

Regards,
Pravin
I had a Windows admin create a PowerShell script for me (it requires code-signing, plus app-whitelisting complexity), and I have configured it as a Splunk input. It works fine from a PowerShell prompt, and I can see from _internal that Splunk is executing it, but I'm receiving no output.

Script:

    #main
    $command = {
        try {
            $Response = Invoke-WebRequest -Uri 'www.google.com'
            $StatusCode = $Response.StatusCode
        } catch {
            $StatusCode = $_.Exception.Response.StatusCode.value__
            if ($StatusCode -eq $null) { $StatusCode = '000' }
        }
        return $StatusCode
    }
    $StatusCode = Invoke-Command -ScriptBlock $command
    Switch ($StatusCode) {
        '000' { Write-Warning ('Web_Connectivity url=www.google.com status=failure status_code={0}' -f $StatusCode) -Verbose }
        default { Write-Host ('Web_Connectivity url=www.google.com status=success status_code={0}' -f $StatusCode) -ForegroundColor Green }
    }

With this inputs.conf:

    [powershell://test-internetaccessSplunk]
    script = . "$SplunkHome\etc\apps\test_Windows_Scripts\bin\test-internetaccessSplunk.ps1"
    schedule = */5 * 9-16 * 1-5
    sourcetype = Script:Web_Connectivity
    source = Script:Web_Connectivity
    index = win_test

Note: the schedule will be changed to once per day once it works.

_internal log events:

    05-10-2022 09:45:00.0001576-7 INFO Start executing script=. "$SplunkHome\etc\apps\test_Windows_Scripts\bin\test-internetaccessSplunk.ps1" for stanza=test-internetaccessSplunk
    05-10-2022 09:45:00.8595184-7 INFO End of executing script=. "$SplunkHome\etc\apps\test_Windows_Scripts\bin\test-internetaccessSplunk.ps1" for stanza=test-internetaccessSplunk, execution_time=0.8593608 seconds
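One likely culprit, offered as a guess: Splunk's PowerShell input captures objects written to the output stream, while Write-Host and Write-Warning write to the host and warning streams, which the input may never see. A sketch of the final Switch block using Write-Output instead (same URL and message format as the script above):

```powershell
Switch ($StatusCode) {
    '000' {
        # Write-Output sends the string down the pipeline, which the
        # Splunk PowerShell input captures as an event.
        Write-Output ('Web_Connectivity url=www.google.com status=failure status_code={0}' -f $StatusCode)
    }
    default {
        Write-Output ('Web_Connectivity url=www.google.com status=success status_code={0}' -f $StatusCode)
    }
}
```

Running the modified script at a prompt should still print both cases, so nothing is lost for interactive use.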
Hello experts, I have a transaction query that I am displaying in a table. I get results, but the events tied into a single transaction appear as a single row in the table. I would like them displayed in separate rows, as if they were individual search results. Here's an example.

Log data:

    Transaction Id=1, step=1, data_x=dataX1, data_y=dataY1
    Transaction Id=1, step=2, data_x=dataX2, data_y=dataY2

How the results currently look (multivalue cells per transaction):

    Transaction Id | data_x        | data_y
    1              | dataX1 dataX2 | dataY1 dataY2
    2              | ...           | ...

How I need them to look:

    Transaction Id | data_x | data_y
    1              | dataX1 | dataY1
    1              | dataX2 | dataY2
    2              | ...    | ...
    2              | ...    | ...

Any help appreciated. Thanks!
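If one row per event is the goal, it may be that transaction is not needed at all; sorting the raw events gives exactly the second table. A sketch, assuming the fields are already extracted and using a placeholder index name:

```
index=your_index
| table Transaction_Id, step, data_x, data_y
| sort Transaction_Id, step
```

If transaction is still needed for its grouping logic (eval'd fields, duration, etc.), an alternative is to keep the transaction output and break the multivalue cells apart afterwards with mvexpand on one field at a time.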
I am getting this message from the Salesforce Splunk app: "Cannot expand lookup field 'UserType' due to a reference cycle in the lookup configuration. Check search.log for details and update the lookup configuration to remove the reference cycle."
Hello, I need to create a table visualization. I have several similar tables side by side on a dashboard; the only difference between them is the "Base Level". The client wants the visualization to look consistent: the values in the "Priority difference" column should always run from -4 to 4 (-4, -3, -2, -1, 0, 1, 2, 3, 4), and when there are no logs for a value, the "Number" column should show "0", as in the first screenshot, where I added the missing rows manually to illustrate the case. Right now, on all the dashboards, when there are no logs for a value the row is simply missing. Could you kindly advise?
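One common pattern for pinning the axis rows, sketched with hypothetical field names Priority_difference and Number: append a zero-filled template covering all nine values, then collapse with max so real counts win over the zeros:

```
... your base search ...
| stats count as Number by Priority_difference
| append
    [| makeresults count=9
     | streamstats count as row
     | eval Priority_difference = row - 5, Number = 0
     | table Priority_difference Number]
| stats max(Number) as Number by Priority_difference
| sort Priority_difference
```

The makeresults/streamstats pair generates rows 1 through 9, which row - 5 maps onto -4 through 4; the final stats keeps the real count wherever one exists and the template's 0 otherwise.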
Hello, I need to display a bar chart with the site field on the x-axis. For each site, I need two bars: the first is the avg of retrans_bytes per site, and the second is the avg of retrans_bytes per user (meaning the users belonging to that site). That's why I used a subsearch, but I can't manage to cross the results between the two searches. Could you help please?

    `index` sourcetype="netp_tcp"
    | chart avg(retrans_bytes) as retrans_bytes by site user
    | append
        [| search `index` sourcetype="netp_tcp"
         | chart avg(retrans_bytes) as retrans_bytes by site ]
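Rather than crossing an append with the outer search, both grains can usually be computed in one pass; a sketch, assuming "avg per user" means the average of each user's own average within the site:

```
`index` sourcetype="netp_tcp"
| eventstats avg(retrans_bytes) as avg_per_site by site
| stats avg(retrans_bytes) as user_avg, first(avg_per_site) as avg_per_site by site, user
| stats avg(user_avg) as avg_per_user, first(avg_per_site) as avg_per_site by site
| table site, avg_per_site, avg_per_user
```

With one row per site and two numeric columns, the bar chart visualization renders the two bars per site automatically.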
Hi team, could you please help me with this request? I have a correlation search working fine, and I need to exclude three filenames (setup64.exe, eggplantaiagent.exe, and eggplantaiagentcmd.exe) using the custom trigger condition. I have tried several combinations, but it is not reflected in the triggered notables. Please find below the main query and the custom trigger condition, and guide me on how to suppress these filenames in the trigger condition.

Main query:

    index=sec_cb category=WARNING
    | search type=CB_ANALYTICS
    | search ((threat_cause_threat_category IN("KNOWN_MALWARE", "NEW_MALWARE") OR threat_cause_reputation IN("KNOWN_MALWARE", "PUP", "SUSPECT_MALWARE")) OR reason_code IN("T_CANARY"))
    | search NOT(reason_code IN(T_POL_TERM*) AND NOT(threat_cause_threat_category IN("KNOWN_MALWARE") OR threat_cause_reputation IN("KNOWN_MALWARE")))
    | search NOT(reason_code="T_RUN_SYS" AND threat_cause_threat_category="NEW_MALWARE")
    | search reason_code IN(T_*, R_*)
    | search NOT [| inputlookup scanner_ip.csv | rename ScannerIP as threat_cause_actor_sha256 ]
    | search NOT(device_name IN("AZWPRDWVDIMT*", "AZWDRWVDIMT*") AND threat_cause_actor_name="automation controller.xlsm")
    | stats count values(severity) as severity values(process_name) as process_name values(reason_code) as reason_code values(device_os_version) as os_version values(device_external_ip) as external_ip values(device_internal_ip) as internal_ip values(device_location) as location values(device_username) as username values(threat_cause_threat_category) as threat_category values(threat_cause_reputation) as threat_reputation values(category) as category values(reason) as reason dc(id) as id_dc values(threat_cause_actor_name) as threat_actor values(create_time) as time by device_name id sourcetype
    | rename device_name as hostname
    | table hostname threat_actor id severity process_name reason_code reason os_version external_ip internal_ip location username threat_category threat_reputation category sourcetype time id_dc count

Custom trigger condition:

    search threat_actor != "setup64.exe"
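One thing to note: threat_actor is built with values(), so it can be multivalue, and != against a single name will not behave as expected when a row carries several actor names. A sketch of a trigger condition covering all three files:

```
search NOT threat_actor IN ("setup64.exe", "eggplantaiagent.exe", "eggplantaiagentcmd.exe")
```

An arguably cleaner alternative is filtering on the raw field before the stats, so the notables never aggregate those processes in the first place (inserted before the stats line in the main query):

```
| search NOT threat_cause_actor_name IN ("setup64.exe", "eggplantaiagent.exe", "eggplantaiagentcmd.exe")
```

Be aware that the trigger-condition version suppresses an entire notable even when the multivalue threat_actor also contains other, non-excluded names; the pre-stats version avoids that.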
I would like to make a pie chart showing the top 10 tenants by number of hosts, with everything else under the label "other". Currently, I am doing this:

    | stats sum(hostsCount) as hostsCount by TenantName
    | sort hostsCount desc

The issue is that this truncates the TenantNames to 10,000 (as shown in the screenshot), which makes the "other" category's hostsCount inaccurate; there are over 30,000 TenantNames. I would like to change this pie chart to: 1. display the top 10 tenants by hostsCount, and 2. put all the remaining hostsCounts under a label called "other" so that it displays the accurate percentage/amount. What would be the best way to do this?
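One way to sketch this: rank the tenants, relabel everything past rank 10 as "other", and re-aggregate, so the pie ends up with exactly 11 slices and the "other" slice carries the true remainder:

```
| stats sum(hostsCount) as hostsCount by TenantName
| sort - hostsCount
| streamstats count as rank
| eval TenantName = if(rank <= 10, TenantName, "other")
| stats sum(hostsCount) as hostsCount by TenantName
```

Since the final stats runs over the already-summed rows rather than the raw 30,000+ series, the usual charting truncation limits no longer distort the "other" total.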
Hi, I would like to execute a Windows script when a certain event is triggered. How do I upload a custom action script to a SaaS controller, NOT an on-prem controller? All the documentation talks only about the on-prem controller. Thanks.
We recently started working with metrics data. The application sends metrics events with the dimensions component, deployment_id, and timestamp_seconds_from_epoch, and the metric names change_committed and release. We are trying to calculate, per deployment_id, the duration between the time a change was committed (change_committed) and released (release), based on timestamp_seconds_from_epoch (an epoch timestamp). We thought the transaction command would be helpful, but since we aren't leveraging _time and are instead using a custom time field, we are having trouble figuring out the best approach. Here is what we currently have:

    | mstats avg("change_committed") as change_committed prestats=true WHERE "index"="statsd" span=auto BY deployment_id
    | table _time deployment_id
    | append
        [| mstats avg("release") as release prestats=true WHERE "index"="statsd" span=auto BY deployment_id
         | table _time deployment_id ]

Example metrics events:

    change_committed:1,timestamp_seconds_from_epoch:1651096172,deployment_id:28020
    release:1,timestamp_seconds_from_epoch:1651097000,deployment_id:28020

How can we track the duration of timestamp_seconds_from_epoch between a change_committed event and a release event for each deployment_id?
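A sketch of one approach, assuming timestamp_seconds_from_epoch was indexed as a dimension (so it can appear in a BY clause) rather than only as the metric time: group by deployment_id and the custom timestamp, then pick out the timestamp carried by each metric name and subtract:

```
| mstats avg(change_committed) as committed, avg(release) as released
    WHERE index="statsd" BY deployment_id, timestamp_seconds_from_epoch
| eval ts = tonumber(timestamp_seconds_from_epoch)
| stats min(eval(if(isnotnull(committed), ts, null()))) as commit_time,
        min(eval(if(isnotnull(released), ts, null()))) as release_time
        by deployment_id
| eval duration_seconds = release_time - commit_time
```

For the example events above this would yield 1651097000 - 1651096172 = 828 seconds for deployment_id 28020. If the dimension is not searchable this way, an alternative is emitting the metrics with _time set from timestamp_seconds_from_epoch at ingest, after which earliest()/latest() aggregations work directly.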
Good Morning, I'm trialing Splunk Cloud in anticipation of a purchase. I have installed Splunk Enterprise as the deployment server and universal forwarders on three servers. My clients are showing up in "Forwarder Management" but I can't seem to get event logs from any servers except the deployment server. I have enabled firewall ports outbound 8089 and inbound 9997 on the deployment server. These are all Server 2019 machines. I have verified inputs.conf is pointing event logs to index:wineventlog but that index locally has 0 results and about 112,000 results on the cloud server. I'm sure it's something simple I'm missing with all the moving parts. Thank you in advance!
Hi, how do we increase the database metrics limit? We are getting the notice "Maximum Custom Metrics reached". I looked at this article and it did not help: https://community.appdynamics.com/t5/Knowledge-Base/How-do-I-increase-custom-metric-limits-for-database-monitoring/ta-p/28970 Thanks

^ Edited by @Ryan.Paredez to include more info
Hi, I have data in a Splunk table like the one below:

    Arista | ConsoleRule | Host            | UnknownRule
    Passed | Failed      | GDTVFVDFVS-BDHF | Passed
    Passed | Failed      | FSSGVDF-BDHF    | Passed
    Failed |             | DGUYSFDF-BDHF   | Passed
    Passed | Failed      |                 |
    Failed | Failed      | DGUYSFDF-BDHF   |
    Failed | Failed      | DGUYSFDF-BDHF   |

I need it like the table below:

    Arista | ConsoleRule | Host            | UnknownRule
    Passed | Failed      | GDTVFVDFVS-BDHF | Passed
    Passed | Failed      | FSSGVDF-BDHF    | Passed
    Failed | Failed      | DGUYSFDF-BDHF   | Passed
    Passed | Failed      | FSSGVDF-BDHF    |
    Failed | Failed      | DGUYSFDF-BDHF   |
    Failed |             |                 |

Can anyone please help us? Is there any possible way to achieve this?
Currently we're getting data from Azure: the cloud sends certain logs to an Event Hub our customer set up, and we pull the data from the Event Hub just as stated in the documentation, with the MS Cloud Services add-on. The problem now is that our customer wanted dashboards filled with the incoming data. We thought that was a normal request, so we installed the Microsoft Azure App for Splunk, but there we saw nothing. After further investigation we saw two things:

- the incoming data fields are all extracted but horribly named, with long strings of names
- the sourcetype for all logs (around 7 different ones) is something like xyz_eventhub, which the app understandably doesn't know and can't use

So my question is how to fix the issue of only having one sourcetype, even though the props/transforms within the Cloud Services add-on should extract everything perfectly. We are currently thinking about splitting the data into the needed sourcetypes with regex and props/transforms, but I keep wondering why it doesn't work in the first place; the vendor is Microsoft, not some third-party no-name. Glad for any ideas!
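Until the root cause is found, one stopgap is rewriting the sourcetype at index time based on a field in the payload. The stanza name, category string, and target sourcetype below are all placeholders to adapt to your data:

```
# props.conf -- attach the transform to the incoming eventhub sourcetype
[xyz_eventhub]
TRANSFORMS-set_sourcetype = azure_set_signin_sourcetype

# transforms.conf -- match on the log category and rewrite the sourcetype
[azure_set_signin_sourcetype]
REGEX = "category":\s*"SignInLogs"
FORMAT = sourcetype::azure:aad:signin
DEST_KEY = MetaData:Sourcetype
```

One such pair per log category (roughly 7 here) would split the stream; note these must live on the instance that first parses the data, and that retagging only helps the dashboards if the new names match what the Microsoft Azure App expects.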
Hello, I have been given a list of 40 servers in a text file, all separated by commas, for example: server1, server2, server3, etc. I can't upload the text file to Splunk and compare the data that way, so is there a way to just list all the servers in the search field and search my index? I know I can put OR between each one, but I'm sure there is a quicker way.

Thanks,

Allan
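SPL's IN operator avoids the chain of ORs, e.g. host IN ("server1", "server2", ...). If retyping 40 names is the chore, a small helper can turn the comma-separated text into a ready-made clause; the field name host is an assumption, swap in whatever field holds the server name:

```python
# Paste the full comma-separated list from the text file here.
servers = "server1, server2, server3"

# Strip whitespace around each name and drop empty entries.
names = [s.strip() for s in servers.split(",") if s.strip()]

# Build an SPL IN clause with each name quoted.
clause = 'host IN ({})'.format(", ".join('"{}"'.format(n) for n in names))
print(clause)  # host IN ("server1", "server2", "server3")
```

The printed clause can be pasted straight after the index term, e.g. index=myindex host IN (...).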
I have logs in this form:

    measures: {
        API.V1.WEBS_ENTITLED_PRODUCTS: 296
        success: 300
    }

What query would display the field name "API.V1.WEBS_ENTITLED_PRODUCTS" rather than its value? I want the output to be "API.V1.WEBS_ENTITLED_PRODUCTS".
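If the goal is the field's name rather than its value, fieldsummary can surface the names; a sketch, assuming the JSON auto-extracts these as measures.* fields:

```
... | fields measures.*
| fieldsummary
| search field="measures.*" AND field!="measures.success"
| table field
```

Another option when the field list is dynamic is foreach with the <<FIELD>> token, which collects each matching field name into a multivalue result via mvappend.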
Hello all, how do I check how long it took for an event to appear in Splunk? By the way, Solved: How do i find out how long it takes Splunk to actu... - Splunk Community didn't help. Thank you
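One way to measure this: _indextime records when Splunk indexed an event, while _time is the event's own timestamp, so their difference is the ingestion lag per event. A sketch with a placeholder index:

```
index=your_index
| eval index_lag_seconds = _indextime - _time
| stats min(index_lag_seconds) avg(index_lag_seconds) max(index_lag_seconds)
```

Drop the stats line to inspect the lag of individual events; note the figure is only meaningful when timestamp extraction for the sourcetype is correct, since a mis-parsed _time skews it.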
Currently, our Splunk Cloud health is RED and we are unable to run any search. I changed the alert conditions on our saved searches, but that has not helped. Please help us recover from this situation.
Hi, I am currently facing an issue where my Splunk Universal Forwarder can establish a connection with the Splunk server but does not forward the data from the target folder of interest. Is there a way to troubleshoot this? A diagnostic search of index="_internal" shows that Splunk is streaming system logs from my PC, proving that a link has been established with the Splunk server. However, querying index="ForwarderText_index" (my target index for the monitored files) yields nothing.

Splunk Universal Forwarder installation configuration details:

    Server: MyServerName
    Port/Management Port: 8089 (default)
    Target Folder: C:\Users\MyUserName\Documents\MyProject\logs\Splunk_Monitoring_Folder

inputs.conf location: C:\Program Files\SplunkUniversalForwarder\etc\system\local

File contents:

    [monitor://C:\Users\cftfda01\Documents\MyProject\logs\Splunk_Monitoring_Folder\SubFolder01]
    disabled = false
    index = ForwarderText_index
    host = MyComputerID

outputs.conf location: C:\Program Files\SplunkUniversalForwarder\etc\system\local

    [tcpout]
    defaultGroup = default-autolb-group

    [tcpout:default-autolb-group]
    server = MyServerName:9997

    [tcpout-server://MyServerName:9997]
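A few checks worth running on the forwarder itself, assuming the default install directory; btool shows the effective merged monitor config, and inputstatus reports what the tailing processor is actually doing with each file:

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"
.\splunk.exe btool inputs list monitor --debug
.\splunk.exe list inputstatus
```

Two common causes fit these symptoms: the index named in inputs.conf (ForwarderText_index here) does not exist on the indexer, so the events are discarded or blocked on arrival, or the monitored files were already read once and are not growing, since the forwarder only sends new data past its recorded checkpoint.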