All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I want to get the externally linked incident number into the email trigger of a correlation search. I have linked a ServiceNow (SNOW) incident ticket to my episode, and now I want to send this incident's information in the email. Any help would be highly appreciated.
Hi Team, I have verified props.conf and Access Controls » Users » Time zone for my user, and everything is set correctly. For my user the time zone shows as EST, but if, as an admin, I open all of that user's scheduled searches, I see CEST. I am not sure why it shows up differently for the user, or how to resolve it.
Hello all, I have one business transaction registered with entry point type "Web Service" in a .NET application. Within that BT I am trying to configure an additional "POCO custom BT match rule" based on a specific class and method, which I am doing from the call graph in the BT snapshot. However, the newly configured BT is not getting registered and is not reflected separately under the Business Transactions tab in the controller, even though I have set a higher priority on it. It shows as a "Custom entry point" in the call graph but does not appear in the BT tab. I have also tried configuring an SEP for that specific class and method, but with no luck. Please help me understand how I can achieve this. N.B.: I am using an on-prem controller, and I am not recycling the app pools after this configuration. Thanks.
Hello, I need to delete the numeric suffix in variable names, so that CETAT_UGE_11 becomes CETAT_UGE. I have many variables like this, so I need a solution that works for all of them. I started with:

| rex result = column "([0-9]{2-3})"

where column is a table column that contains CETAT_UGE_11 and others, but it does not work:

Error in 'rex' command: The regex 'hfield' does not extract anything. It should specify at least one named group. Format: (?...).

Thank you for helping me.
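A minimal sketch in Python of the transformation being asked for (the variable names mirror the question; the `_\d+$` suffix pattern is an assumption about the naming scheme):

```python
import re

def strip_numeric_suffix(name: str) -> str:
    """Drop a trailing underscore-plus-digits suffix, e.g. CETAT_UGE_11 -> CETAT_UGE."""
    return re.sub(r"_\d+$", "", name)

print(strip_numeric_suffix("CETAT_UGE_11"))  # CETAT_UGE
print(strip_numeric_suffix("CETAT_UGE"))     # CETAT_UGE (no suffix, unchanged)
```

Anchoring the digits to the end of the name (`$`) is what keeps names without a suffix untouched.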
Hi all, I installed the Splunk universal forwarder on Windows Server 2016. Is it possible to get a log of how much Windows memory and CPU the universal forwarder's splunkd process uses? So far I have only found these perfmon stanzas:

[perfmon://CPU Load]
[perfmon://Available Memory]

Thank you for helping me.
My data looks like this in _raw:

{"device":"abcd","host":"1.2.3.4"}
{"device":"efgh [ = ILO = ]","host":"2.3.4.5"}
{"device":"qrst - [ab cd ef]","host":"4.5.6.7"}

Now I am trying to fetch device and host using regex; please help. It should be simple, but since host is a default Splunk field, the host the data comes from collides with the host field inside _raw, so I need to rename the host coming from _raw to ip.
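Since each _raw event is valid JSON, a JSON parse is more robust than a regex for this shape of data. A Python sketch of the intended field mapping (the rename of `host` to `ip` follows the question; nothing here is Splunk-specific):

```python
import json

raw_events = [
    '{"device":"abcd","host":"1.2.3.4"}',
    '{"device":"efgh [ = ILO = ]","host":"2.3.4.5"}',
    '{"device":"qrst - [ab cd ef]","host":"4.5.6.7"}',
]

def extract(event: str) -> dict:
    """Parse one _raw event and rename its 'host' key to 'ip',
    avoiding a clash with a default field also called 'host'."""
    parsed = json.loads(event)
    return {"device": parsed["device"], "ip": parsed["host"]}

for e in raw_events:
    print(extract(e))
```

Parsing rather than pattern-matching also copes with the bracketed characters inside some `device` values.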
I created an add-on using Splunk Add-on Builder, and also an app related to that add-on. Now I need to package the app and add-on together and deploy them onto another Splunk instance. Please help me with how to create a .tgz file that includes both the app and the add-on.
Hi All, I'm fairly new to Splunk. I'm trying to save some time with an automated report on IIS time-taken. I need to report on three metrics, the percentage of requests below 2ms, below 4ms, and above 4ms, and then output the searches to a single CSV. I've probably gone about this the wrong way, but so far I've created three separate searches feeding three different reports. What I'm trying to achieve, for the previous month's logs, is to table the above three metrics by day, then add the days together for a total, working out the percentage of each metric. So far this is what I've got, which achieves my objective for the metrics by day:

host="" sourcetype=iis AND NOT (".jpg" OR "*.txt")
| eval time_taken = time_taken/1000
| search time_taken< "2"
| stats count by date
| rename count as "Below 2ms"

I've done one search like this for each of the metrics I'm trying to report on, but I assume I'm going about it the wrong way and am not sure how to string them together. I wonder if anyone could give me any pointers?
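The arithmetic being asked for can be sketched outside SPL. This Python stand-in interprets the three buckets as below 2ms, 2-4ms, and above 4ms, which is an assumption about how the thresholds are meant to partition the data:

```python
def bucket_percentages(times_ms):
    """Given time-taken values in milliseconds, return the percentage of
    requests below 2ms, between 2ms and 4ms (inclusive), and above 4ms."""
    total = len(times_ms)
    below_2 = sum(1 for t in times_ms if t < 2)
    two_to_4 = sum(1 for t in times_ms if 2 <= t <= 4)
    above_4 = total - below_2 - two_to_4
    return {
        "below_2ms_pct": round(100 * below_2 / total, 2),
        "2_to_4ms_pct": round(100 * two_to_4 / total, 2),
        "above_4ms_pct": round(100 * above_4 / total, 2),
    }

print(bucket_percentages([1, 1, 3, 5]))  # {'below_2ms_pct': 50.0, '2_to_4ms_pct': 25.0, 'above_4ms_pct': 25.0}
```

Computing all three buckets in one pass over the data is the shape a single combined search would take, rather than three separate ones.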
Is this app supported on Splunk Cloud? I have uploaded it, but it has been rejected. Regards, Fabrizio Zavalloni
While running AppInspect (version 2.1.0) against a Splunk add-on, I get the error "check_rest_handler_python_executable_exists - Check that python version is python3 for executables in restmap.conf." Even after I updated it to python3, the error is still there; however, if I run the old AppInspect (version 1.7), the app passes. Kindly let me know what I am missing here. Thanks.
Hi Team, I am trying to register BTs as first-class business transactions. However, registration fails with the message "failure registering business transaction or it is already registered" (screenshot attached). The items below are already done:
1. Business transaction lockdown is disabled
2. Increased the BT limit to 70 from the default 50
3. Excluded unwanted BTs
Will the transaction be detected again as a first-class transaction once load is applied? Kindly help with a solution.
I want to use an object storage system (not AWS, but S3-compatible) as Splunk SmartStore, so I used the S3 compatibility test suite offered by Splunk (https://github.com/splunk/s3-tests/) to test my object storage. After running the tool, I hit some errors that I cannot solve. My OS is CentOS 7.4. Please help me, thanks.
I have JSON logs whose fields I want to extract, and I have done all the field-extraction configuration in props.conf. My log:

{"export_time":"06:45:53","flows":[{"applicationNamePath":"XXX","applicationName":"tcp","flowStartSeconds":"1589957129","sourceTransportPort":"XXX","sourceIPv4Address":"190.x.x.x","destinationIPv4Address":"X.x.x.x","flowId":"64414","flowDirection":"0","tunnelTechnology":"no","destinationTransportPort":"443","flowExpired":"1","detectionCompleted":"0","tcpControlBits":"14","flowDurationMilliseconds":"9000","octetTotalCount":"152","packetTotalCount":"3","applicationCategoryName":"Network Service","p2pTechnology":"no","attributes":[]}],"last":1}

My props.conf:

indexed_extraction = json
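For what it's worth, the sample event parses as valid nested JSON, and the per-flow fields sit one level down inside the `flows` array, which matters for any flat extraction. (Note also that Splunk's documented props.conf attribute name is `INDEXED_EXTRACTIONS`, in upper case.) A quick Python sketch, using a trimmed copy of the event:

```python
import json

# Trimmed version of the sample event from the question.
raw = ('{"export_time":"06:45:53","flows":[{"applicationName":"tcp",'
       '"sourceIPv4Address":"190.0.0.1","destinationTransportPort":"443"}],"last":1}')

event = json.loads(raw)
# Per-flow fields live inside the "flows" array, one level below the
# top-level keys, so they need a path like flows{}.applicationName.
for flow in event["flows"]:
    print(flow["applicationName"], flow["sourceIPv4Address"])
```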
Hello, one of our partners received a request from a customer who wishes to manage end-to-end support-ticket access for their own customers who request it. I'm curious whether anyone has worked through a similar use case using Splunk. How can this be done? Can we associate a partner address with the customer's support, and thus see all the tickets created by I-Tracing, for example? If anyone in the Splunk community has worked through a use case like this, I'd like to learn from what you can share. We'd like to help our partner collaborate efficiently with the customer.
Taking a cue from this thread, https://answers.splunk.com/answering/823859/view.html, the code

<html>
  <style>
    .input-dropdown {
      min-width: XXXpx !important;
      width: XXXpx !important;
      max-width: XXXpx !important;
    }
    .splunk-dropdown .select2-container {
      min-width: XXXpx !important;
      width: XXXpx !important;
      max-width: XXXpx !important;
    }
  </style>
</html>

works fine. But now, how can I target a single dropdown input rather than all of them? For example, a dropdown with id="single_dropdown":

<input type="dropdown" token="token_to_catch" id="single_dropdown">

I'd like to ask if and how it is possible to resize a single input object from the dashboard XML. Thanks.
Sorry for the silly attention-grabbing dancing question mark, and thanks for any help on this. I've had to dive into the deep end of Splunk with no previous exposure for various staffing reasons, so please forgive my ignorance. I'm trying to get my use of the sort 0 command right (to override the 10k scheduler output limitation). I'm using the cloud version, so I have no control over the .conf files on the server or anything like that. Where am I still missing a sort 0? I've got this, but it's still truncating all rows after 10,000:

Thing.process.valid.request OR herschel.update.job.completed
| transaction activity_id startswith="Thing.process.valid.request" endswith="herschel.update.job.completed"
| eval start_time=_time
| eval end_time=_time+duration
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(start_time) AS ThingPackager_Start
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(end_time) AS Sent_To_Thing
| eval delay_hours = round(duration/60/60, 2)
| rename activity_id AS tar_name
| eval media_assetID=substr(tar_name,1,12)
| sort 0 end_time
| stats first as *, first(_*) as _* by media_assetID
| table Thing_Start Sent_To_Thing start_time end_time Thing_id tar_name media_assetID delay_hours
| sort 0 by delay_hours desc

Thank you for any help; I'm going bonkers trying to get my head around this syntax.
Hi, I'm trying to look up different CSV files depending on a field value, but the lookup command seems to have a problem using a variable as the lookup file name.

index=yxz
| search model="123"
| eval lookupfile=case(model == 123, "123_lookup.csv", model == 456, "456_lookup.csv", model == 789, "789_lookup.csv")
| lookup lookupfile ErrorCode

I'm getting this error message:

Error in 'lookup' command: Could not construct lookup 'lookupfile, ErrorCode'. See search.log for more details.

Can someone help me with this?
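The selection the case() expression is reaching for can be sketched outside SPL. This hypothetical Python stand-in shows the intended model-to-file mapping (the fallback file name is invented for illustration):

```python
# Hypothetical sketch: choose a lookup file name based on the model value,
# mirroring the case() branches from the question.
lookup_by_model = {
    "123": "123_lookup.csv",
    "456": "456_lookup.csv",
    "789": "789_lookup.csv",
}

def lookup_file_for(model: str) -> str:
    # "default_lookup.csv" is a made-up fallback name for illustration.
    return lookup_by_model.get(model, "default_lookup.csv")

print(lookup_file_for("123"))  # 123_lookup.csv
```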
Hello All, I am trying to optimize the performance of a dashboard that was built some time back. The existing dashboard's panel query joins at least 10 lookup files, and over time the lookups have grown to more than 100,000 rows, which has caused a lot of problems in the query's join conditions. I would like to understand:
1. What is the search performance difference between reading data from a lookup and loading it from an index?
2. Does the lookup command have limitations like the join command? What is the maximum row limit for a lookup command, and can it be used instead of join when the data comes from a lookup?
3. Is there a difference in the join limit set in limits.conf between data from an index and data from a lookup?
I have a scenario where the limits.conf default values for join and subsearches have been increased, and I am also using max=0 in my join, but the results are not coming out as expected. It works perfectly when I optimize the subsearch containing the lookup down to fewer than 50,000 rows. Any thoughts or advice on this?
Hi Folks, I am getting an error message when trying to send alerts from Splunk to Slack. Here is the error message:

sendmodalert - action=slack_webhook_alert - Alert action script returned error code=255
ERROR SearchScheduler - Error in 'sendalert' command: Alert script returned error code 255., search='sendalert slack_webhook_alert results_file

Any idea what might be causing this issue?
How do I use the license key for the SNMP modular input? It is giving me an error. Are there other options to send SNMP events and receive them at a Splunk instance?