All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


DB Connect retrieves data from multiple tables. To reduce load on the database, the ingestion jobs are staggered in time, and to pick up only new rows we use a rising-column input on the id column, so the following SQL is executed:

SELECT * FROM "zaif"."Public"."Table name" WHERE id > ? ORDER BY id ASC

However, the development team has asked that, no matter when an import run starts, the data ingested on a given day should always end at the same point. For example, even if runs start at 5:10, 5:20, and 5:30, each should only ingest data up to 5:00, and the next day's run should pick up everything since 5:00 on the previous day, so that no updates are missed. How should the import be set up to achieve this?
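A rising column on id alone cannot enforce a common end boundary. One possible approach (a sketch, assuming the table also has a timestamp column; close_date below is a hypothetical name) is to bind an upper bound at the most recent 5:00 into the WHERE clause, so every run that day stops at the same point. The boundary calculation:

```python
from datetime import datetime, time, timedelta

def cutoff_at_5am(now: datetime) -> datetime:
    """Return the most recent 05:00 boundary at or before `now`.

    Runs started at 5:10, 5:20, and 5:30 all get the same 5:00 cutoff;
    a run started before 5:00 falls back to the previous day's boundary.
    """
    boundary = datetime.combine(now.date(), time(5, 0))
    if now < boundary:
        boundary -= timedelta(days=1)
    return boundary

# Staggered runs on the same day share one cutoff:
runs = [datetime(2020, 2, 3, 5, 10), datetime(2020, 2, 3, 5, 20), datetime(2020, 2, 3, 5, 30)]
cutoffs = {cutoff_at_5am(r) for r in runs}
print(cutoffs)  # -> {datetime.datetime(2020, 2, 3, 5, 0)}

# The cutoff would then be bound as a second query parameter, e.g.:
# SELECT * FROM "zaif"."Public"."Table name" WHERE id > ? AND close_date < ? ORDER BY id ASC
```

Since a rising checkpoint only advances past rows actually returned, rows after the cutoff would naturally be picked up by the next day's run, so nothing should be missed.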
Trying to add an input in DBX, or just explore SQL, I get the error "com.splunk.dbx.exception.NotFoundException: Can not find of type connecction". I am using Splunk 8.0.1 with a dev license and DB Connect 3.2.0. In the timestamp chooser I only get columns. Any ideas how to fix this?
Hello Splunkers, I installed collectd on the same server where Splunk is installed, and I want to get system data from this server. The collectd service is active, but I can't see any data in Splunk. This is the configuration in collectd.conf:

LoadPlugin write_http
URL "https://X.X.X.X:8088/services/collector/raw?channel=bfd29681-2722-48c7-8495-05f492dd2bce"
Header "Authorization: Splunk bfd29681-2722-48c7-8495-05f492dd2bce"
Format "JSON"
Metrics true
StoreRates true
server "X.X.X.X"
port "8088"
token "bfd29681-2722-48c7-8495-05f492dd2bce"
ssl false
verifyssl false

Any help please?
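For reference, collectd's write_http options normally have to be nested inside a <Plugin>/<Node> block; flattened at the top level they are ignored. Also, write_http does not accept server/port/token/ssl/verifyssl (those belong to Splunk's separate write_splunk plugin). A sketch of the usual shape, reusing the URL and token from the question (the node name "splunk" is arbitrary):

```
LoadPlugin write_http

<Plugin write_http>
  <Node "splunk">
    URL "https://X.X.X.X:8088/services/collector/raw?channel=bfd29681-2722-48c7-8495-05f492dd2bce"
    Header "Authorization: Splunk bfd29681-2722-48c7-8495-05f492dd2bce"
    Format "JSON"
    Metrics true
    StoreRates true
    VerifyPeer false
  </Node>
</Plugin>
```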
Hello all, I want to initialize a token value when the dashboard is opened. I have tried writing a token value inside "init", but what I actually want is to compute the token value in the "init" block; when I tried that, nothing changed. How can I achieve this?
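For what it's worth, Simple XML's <init> block (Splunk 6.5 and later) supports both <set> for literal values and <eval> for computed ones, evaluated once when the dashboard loads. A minimal sketch (token names are illustrative):

```xml
<form>
  <init>
    <set token="static_token">some literal value</set>
    <!-- computed once, when the dashboard is opened -->
    <eval token="computed_token">strftime(now(), "%Y-%m-%d")</eval>
  </init>
  <!-- fieldsets, rows, and panels as before -->
</form>
```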
PROBLEM DESCRIPTION: compare today's run duration of a job with its 30-day average duration, and send an email alert if today's duration exceeds the threshold (1.2 x the 30-day average). Please find the query below. However, I see a discrepancy in the 30-day average for jobs that run longer than 60 minutes. Can you advise if I am missing a step?

index=MY_INDEX sourcetype=autosys_demon* JOB="JOBNAME" earliest=-30d@d latest=now()
| dedup JOB, STATUS
| eval startTime=case("0"!=(strftime(_time, "%a %B %d %Y %H:%M:%S")) AND STATUS="RUNNING", strftime(_time, "%a %B %d %Y %H:%M:%S")),
       endTime=case("0"!=(strftime(_time, "%a %B %d %Y %H:%M:%S")) AND STATUS="SUCCESS", strftime(_time, "%a %B %d %Y %H:%M:%S")),
       terminateTime=case("0"!=(strftime(_time, "%a %B %d %Y %H:%M:%S")) AND STATUS="TERMINATED", strftime(_time, "%a %B %d %Y %H:%M:%S"))
| eval sTime=strptime(startTime, "%a %B %d %Y %H:%M:%S")
| eval eTime=strptime(endTime, "%a %B %d %Y %H:%M:%S")
| eval tTime=strptime(startTime, "%a %B %d %Y %H:%M:%S")
| eventstats latest(STATUS) AS STATUS BY JOB
| transaction JOB, startTime, endTime
| eval e_Time=if(STATUS="TERMINATED" OR eTimeThreshold
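Separately from the SPL, it may help to sanity-check the alert rule itself against known durations: the durations fed into the average should be computed numerically (e.g. eTime - sTime in seconds) rather than compared as formatted strings. A minimal sketch of the intended comparison (durations in minutes, numbers illustrative):

```python
def exceeds_threshold(today_minutes: float, avg30_minutes: float, factor: float = 1.2) -> bool:
    """True when today's duration exceeds the factor-times-average threshold."""
    return today_minutes > factor * avg30_minutes

# With a 30-day average of 60 minutes, the alert threshold is 72 minutes:
print(exceeds_threshold(75, 60))  # -> True
print(exceeds_threshold(70, 60))  # -> False
```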
Most of our dashboard users refresh the dashboard with the browser's refresh button. I want to implement functionality so that when they hit refresh, all tokens are unset and reset to their defaults, just as happens when the filter values (form.token=tokenValue) are removed from the URL and the dashboard returns to its default state.
I would like to get the online and offline agents in a dump report, in CSV or PDF format. This will help me run daily health checks on all installed and running agents. So far I have only found workarounds such as: 1. using the snipping tool, 2. using the Chrome developer tools. Could you suggest a better way to collect the list of agents from the controller or dashboard? Regards, Sathish
Hi Team, I restarted the Splunk server and tried to log in again with the same username and password, but could not log in. I even tried a new username and password, with the same result. I then uninstalled Splunk Enterprise, reinstalled it, and set a user password during setup, but I am still getting an error message saying "Login failed". Can somebody help me with this? Thanks, Venkat
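If this is a local (non-LDAP) admin account, one documented recovery path on Splunk 7.1 and later is: stop Splunk, move $SPLUNK_HOME/etc/passwd aside, create a user-seed.conf, and start Splunk again. A sketch:

```
# $SPLUNK_HOME/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = <new password, 8+ characters>
```

On the next start, Splunk consumes this file and recreates the admin credentials.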
Hello Splunkers, could you please help me design a solution for the use case below?

Use case: I have a CSV file that contains service center locations, with the fields Name, Address, Status, Rating. If a user updates any field in this file, the change should be reflected in a KV store lookup.

What I tried: I created a KV store lookup and, in a dashboard, a form containing all of these input fields, where users can perform CRUD operations against the KV store lookup. Everything works properly, but I am not able to update the CSV file. Is there any way to also update the CSV file when the KV store lookup is updated? Please help me find a solution for this use case.
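One possible approach (a sketch; lookup names are illustrative) is a scheduled search that mirrors the KV store lookup back out to a CSV lookup file via outputlookup:

```
| inputlookup service_center_kvstore
| outputlookup service_center.csv
```

Scheduled frequently enough, this keeps the CSV copy in the app's lookups directory in step with the KV store edits made through the dashboard.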
I have two fields, open_issue_timestamp and closed_issue_timestamp. When a ticket is opened, the time is written to open_issue_timestamp; once action is taken, the timestamp is written to closed_issue_timestamp at a later point in time. So if we configure the DB input using the ID as the rising column, it will not capture the closed_issue column being filled in after the ticket row was already ingested. How can we overcome this issue?
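That is expected behavior with an id-based rising column: each row is read once, when its id first passes the checkpoint, so columns filled in later are never re-read. A common alternative, assuming the table has (or can be given) a last-modified audit column, is to rise on that timestamp instead, so a ticket row is re-ingested whenever it changes. A sketch (table and column names hypothetical):

```sql
SELECT * FROM tickets WHERE last_updated > ? ORDER BY last_updated ASC
```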
I created an indexes.conf file and placed it in an app called my_all_indexes, to keep all index definitions in one central location for easy management; it sits in the apps folder on the indexer, the forwarder, and in the deployment apps. Now all of my indexes show as belonging to that same app. Is that normal, and if not, how do I change it back? If I just delete the file from these locations, all the indexes except the system ones go away. How/where do I change them back?

My environment: 1 search head, 1 deployment server, 1 indexer, 1 heavy forwarder, etc.

Name           | Type    | App            | Current Size | Max Size  | Event Count | Status
audit          | Events  | my_all_indexes | 16 MB        | 488.28 GB | 123K        | 11 days
_internal      | Events  | my_all_indexes | 1.43 GB      | 488.28 GB | 13.5M       | 2 months
_introspection | Events  | my_all_indexes | 1.38 GB      | 488.28 GB | 1.46M       | 2 months
_metrics       | Metrics | my_all_indexes | 571 MB       | 488.28 GB | 6.05M       | 2 months
_telemetry     | Events  | my_all_indexes | 1 MB         | 488.28 GB |             |
msad           | Events  | my_all_indexes | 1 MB         | 488.28 GB | 0           |
asa            | Events  | my_all_indexes | 1 MB         | 488.28 GB | 0           |
ios            | Events  | my_all_indexes | 1 MB         | 488.28 GB | 0           |
ise            | Events  | my_all_indexes | 1 MB         | 488.28 GB | 0           |
linux          | Events  | my_all_indexes | 1 MB         | 488.28 GB | 0           |
windows        | Events  | my_all_indexes | 1 MB         | 488.28 GB | 0           |
main           | Events  | my_all_indexes | 1 MB         | 488.28 GB | 0           |
Hi, we are using TIBCO BW in our company, and we configured monitoring of TIBCO processes by adding the Java agent properties to the startup script file. However, when we redeploy the application, the changes are overwritten. My question is: is there a workaround to have the Java agent settings re-added automatically after redeployment? Regards, Rayan
Hello, I was told that there is a way to create alert macros and use them to expose the alert info via the REST API, but I can't find any documentation about it. Can anyone give me an example? Thanks.
Can you please let us know how to get the Splunk license utilization for the last 3 months? We would like to get this through a search query.
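One commonly used approach is to search the license manager's internal usage log over a 3-month time range; a sketch (run where _internal license_usage.log is available, typically the license master):

```
index=_internal source=*license_usage.log type="Usage"
| timechart span=1d sum(b) AS bytes
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
```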
Hi, Splunk receives syslog events, but the message type does not appear. For example, Kiwi Syslog or 3CDaemon display the message type, while Splunk only displays the "Message". Thank you.
Splunk Cloud. We have lookup data that needs to be accessed from Splunk Cloud. This data can come either from an external source, e.g. from curl requests to an external API (a), or from a lookup in Splunk (b).

For (a), this would require outbound curl access; the webtools add-on is not Cloud certified, and I am not sure whether it would be possible to upload this type of command extension.

For (b), it means getting a static file daily/weekly to a place where it can be pushed into Splunk. The REST API for the KV store is not supported in Cloud.

So I was thinking of ingesting the data to an index and then having a saved search that writes to the KV store after each upload. That's a bit of a pain, as the upload would count against ingestion quotas, and it just seems the wrong way to do it. Has anyone done this before, or got a better solution?
Dear All, I'm trying to retrieve and parse the Windows DNS log. A sample looks like this:

1/23/2020 11:59:42 PM 0B50 PACKET 000001F5A879FCD0 UDP Snd 10.231.150.89 b40e R Q [8081 DR NOERROR] A (3)www(15)msftconnecttest(3)com(0)

After installing the Splunk Add-on for Microsoft Windows DNS, it automatically extracts the field query = (3)www(15)msftconnecttest(3)com(0). But the query name looks very strange; the real name should be www.msftconnecttest.com. So my question is: how do I parse or transform the query name into the correct format? It probably needs a regular expression or something similar, but I'm not good at that.
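The (3)www(15)msftconnecttest(3)com(0) form is the DNS wire-format label notation: each parenthesized number is the length of the label that follows. Converting it to a dotted name is a small regex substitution; a Python sketch:

```python
import re

def dns_labels_to_name(raw: str) -> str:
    """Convert '(3)www(15)msftconnecttest(3)com(0)' to 'www.msftconnecttest.com'.

    Each '(n)' length marker becomes a dot separator; the leading dot and
    the trailing dot left by the terminating '(0)' are then stripped.
    """
    return re.sub(r"\(\d+\)", ".", raw).strip(".")

print(dns_labels_to_name("(3)www(15)msftconnecttest(3)com(0)"))
# -> www.msftconnecttest.com
```

The same substitution can likely be done at search time in SPL with something like | eval query=trim(replace(query, "\(\d+\)", "."), "."), since eval's replace() accepts a regex.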
Hi, I'm trying to configure a NEAP (notable event aggregation policy) that would send one email / raise one ServiceNow incident for each episode. I tried a few different action rules:

Number of events in episode >= 1 --> this sends emails for every notable event instead of one per episode, and keeps sending emails until the episode breaks.
Number of events in episode == 1 --> this does not trigger emails, since the episodes typically have 3-4 events.

I have a different NEAP for a different type of alert where it raises the incident correctly after the 3rd (same) event, e.g. after 15 minutes at a 5-minute search interval, by using: Number of events in episode == 3. In this case, though, the events are generated all at once, and there could be 1-8 events from different environments that I'm aggregating into one episode. Regards
Hello, I have complex JSON being written to Splunk and want to do model file validation. What is the best way to do this in Splunk for each JSON event written to Splunk? Apart from checking that the JSON matches the model structure, I want to check for mandatory values for some fields and format matching for some fields. Can this be done inside Splunk?

{
  "TestTransaction": {
    "OrderEntryType": 141,
    "Number": 69909696,
    "CloseDate": "2020-02-03T15:31:38.1260000Z",
    "ab": "test",
    "Trans": [
      {
        "Amt": 5.45,
        "Desc": "test card",
        "Id": "961071022758064128",
        "Number": 7777207236838910,
        "ab": "test",
        "$type": "test"
      }
    ],
    "TotalAmt": 5.45,
    "SubAmount": 4.95,
    "TaxAmount": 0.5,
    "DiscountAmount": 0.0,
    "Header": {
      "ServiceType": null,
      "RequestDate": "2020-02-03T15:31:38.1260000Z",
      "$type": "Header"
    },
    "Preparation": "ConsOutOfStore",
    "Details": {
      "Discounts": [],
      "Items": [
        {
          "Qty": 1.0,
          "Sku": null,
          "Price": 4.45,
          "Discounts": [],
          "OverrideDescription": null,
          "OverridePrice": null,
          "Suffix": null,
          "ChildItems": [
            {
              "Qty": 1.0,
              "Sku": null,
              "Price": 0.0,
              "Discounts": null,
              "IsRefunded": false,
              "IsTaxed": false,
              "Summary": {
                "TotalPrice": 4.95,
                "DiscountAmount": 0,
                "SubtotalAmount": 4.95,
                "$type": "testSummary"
              },
              "$type": "testItem"
            }
          ],
          "Taxes": [
            {
              "Name": "Sales Tax",
              "Amount": 50,
              "$type": "testTax"
            }
          ],
          "ReceiptLines": [],
          "Delivery": null,
          "$type": "testDetails"
        }
      ],
      "$type": "trans"
    },
    "RequestId": "test",
    "MessageId": "test",
    "$type": "testTransaction"
  }
}
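Splunk's spath/eval can check individual fields, but schema-style validation is usually easier outside the search language (e.g. in a scripted input or a custom search command). A minimal stdlib-only sketch of the idea, checking required keys, types, and one format rule, using field names from the sample (a real deployment might use a JSON Schema library instead):

```python
import json
import re

# Loose ISO-8601 timestamp check for fields like CloseDate:
ISO_TS = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}")

def validate_transaction(doc: dict) -> list:
    """Return a list of validation errors; an empty list means the event passes."""
    errors = []
    txn = doc.get("TestTransaction")
    if not isinstance(txn, dict):
        return ["missing TestTransaction object"]
    # Mandatory fields with expected types (illustrative rules):
    for key, typ in [("Number", int), ("TotalAmt", float), ("CloseDate", str)]:
        if key not in txn:
            errors.append(f"missing mandatory field {key}")
        elif not isinstance(txn[key], typ):
            errors.append(f"{key} has wrong type")
    # Format check on the timestamp:
    if isinstance(txn.get("CloseDate"), str) and not ISO_TS.match(txn["CloseDate"]):
        errors.append("CloseDate is not an ISO-8601 timestamp")
    return errors

event = json.loads('{"TestTransaction": {"Number": 69909696, "TotalAmt": 5.45,'
                   ' "CloseDate": "2020-02-03T15:31:38.1260000Z"}}')
print(validate_transaction(event))  # -> []
```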
I have a unique situation where some of my users have a slightly different objectClass than usual, and I'm trying to find a way to mask that so the default searches in the MS AD Objects app work properly. Basically the users are being parsed as objectClass="top|otherClass|person|organizationalPerson|user", and I want to selectively remove otherClass using a transforms or props stanza, but I'm unable to do so. I've tried the following on the indexer, in the Windows TA application:

transforms.conf:

[msad_fix_objectClass]
SOURCE_KEY = _raw
REGEX = (?ms).objectClass=(top|)(?:otherClass|)(person|organizationalPerson|user).
FORMAT = objectClass::"$1$2"

props.conf:

[ActiveDirectory]
TRANSFORMS-objectClass = msad_fix_objectClass

But it's not working properly. Anyone have ideas?
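One thing to check is the regex itself: inside a pattern, | is alternation, so (top|) matches "top" or nothing, and the literal pipe delimiters in top|otherClass|person|... are never matched as text. A Python sketch demonstrating the difference (the field value is taken from the question):

```python
import re

raw = 'objectClass="top|otherClass|person|organizationalPerson|user"'

# Unescaped '|' is alternation, so this pattern never matches the
# literal pipe-delimited value:
broken = re.search(r'objectClass="(top|)(?:otherClass|)(person|organizationalPerson|user)"', raw)

# Escaping the delimiter pipes (\|) matches the real value and captures
# the parts around otherClass so it can be dropped:
fixed = re.search(r'objectClass="(top)\|otherClass\|(person\|organizationalPerson\|user)"', raw)
print(fixed and fixed.group(1) + "|" + fixed.group(2))
# -> top|person|organizationalPerson|user
```

Note also that a TRANSFORMS- stanza with FORMAT = field::value creates an indexed field rather than rewriting _raw; if the goal is to mask the raw event itself, a SEDCMD in props.conf (e.g. something like SEDCMD-drop_otherclass = s/\|otherClass\|/|/) may be the simpler route.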