If so, can this be turned off somewhere? I'm using an ingress for Kubernetes, and all it wants is an FQDN; there's no need to specify the port. But if I use https://mydomain.com, the client never phones home. Meanwhile, in K8s, if I try to use https://mydomain.com:8089, it just won't work. I'll go ahead and copy/paste the stanza from the Splunk docs here:

[target-broker:deploymentServer]
targetUri = <uri>
* URI of the deployment server.
* An example of <uri>: <scheme>://<deploymentServer>:<mgmtPort>   # I don't need mgmtPort
connect_timeout = <positive integer>
* See 'connect_timeout' in the "[deployment-client]" stanza for information on this setting.
send_timeout = <positive integer>
* See 'send_timeout' in the "[deployment-client]" stanza for information on this setting.
recv_timeout = <positive integer>
* See 'recv_timeout' in the "[deployment-client]" stanza for information on this setting.
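One workaround sketch, assuming the ingress terminates TLS on 443 and routes traffic to the deployment server's management port (8089) internally: give targetUri an explicit port that the ingress does expose. The host name here is a placeholder.

```ini
# deploymentclient.conf on the client -- hypothetical sketch
[target-broker:deploymentServer]
# :443 satisfies the <scheme>://<host>:<port> form from the docs
# while the ingress forwards 443 -> 8089 inside the cluster
targetUri = https://mydomain.com:443
```

Whether the port can be omitted entirely appears to be undocumented; spelling out :443 sidesteps the question.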
In the table shown above, the highlighted column name 'changequantity', when clicked, should open a link to a URL in a separate window.
Question: I have a table with 4 columns, e.g.:

A B C D
1 3 1 2
2 4 1 6

where A, B, C, and D are the column names. How can clicking a column name open a link to a separate page? That is, clicking A or B should open a link to another page.
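For cell clicks, a hedged Simple XML sketch using the built-in $click.name2$ drilldown token is below; note that Simple XML drilldown fires on cell clicks, not header clicks, so a true header-click handler generally needs a JavaScript extension. The URL is a placeholder.

```xml
<table>
  <search>
    <query>... | table A B C D</query>
  </search>
  <drilldown>
    <!-- $click.name2$ holds the name of the clicked cell's column -->
    <link target="_blank">https://example.com/page?col=$click.name2$</link>
  </drilldown>
</table>
```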
You are to create a dashboard that tracks log feeds, so I imagine it would look like a table with columns like log feed | last seen, colored based on some threshold (last seen 24 hours ago: red; last seen 10 minutes ago: green). It will include: color for categorizing critical levels; email alerting; and it can start with small features.
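A minimal SPL sketch for such a table, assuming each feed is identified by index/sourcetype (the thresholds and field names here are assumptions):

```
| tstats latest(_time) as last_seen where index=* by index, sourcetype
| eval age_min = round((now() - last_seen) / 60)
| eval status = case(age_min <= 10, "green", age_min >= 1440, "red", true(), "yellow")
| convert ctime(last_seen)
```

The status field can then drive table cell coloring, and a scheduled alert on age_min can cover the email requirement.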
Hi, we are using Amazon Linux Workspaces, and we incorporated Splunk into the master image to deploy multiple workspaces from it. We have followed the directions in the http://docs.splunk.com/Documentation/Splunk/6.3.1/Forwarding/Makeadfpartofasystemimage doc and it works. However, the cloned images are not reporting to Splunk. When we look at a cloned image's server.conf and inputs.conf files, they contain a host name entry that is different from the host itself, but it is not reporting to Splunk at all (even though the Splunk service is running on the newly cloned hosts). Basically, we want to incorporate Splunk into the master image itself and deploy it everywhere. Thank you, Senthil
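The usual fix when clones keep the master's identity is to clear the instance-specific state before capturing the image; a sketch (the install path is an assumption):

```shell
# On the master image, before capturing it:
/opt/splunkforwarder/bin/splunk stop
# Removes instance-specific settings (GUID, server name) so each clone
# regenerates them on its first start
/opt/splunkforwarder/bin/splunk clone-prep-clear-config
```

If the clones were made without this step, running it once on each clone and restarting the forwarder should have the same effect.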
Hello! How can I add Office 365 logs to my Splunk deployment if I have 1 search head and 2 indexers using distributed search? Should I install all the add-ons on one indexer and do all the configuration there, and install all the add-ons and the app on the search head?
Hi Team, any suggestions or ideas for migrating from non-clustered indexers to a clustered environment? Currently we have 10 indexers with a clustered search head, and we are planning to migrate to an indexer cluster. What is the process, and what key steps need to be considered? Thanks.
Hi, I am trying to redirect the logs generated by my Java project to Splunk. I am using the appenders below and created an HTTP Event Collector token for them. I am able to receive the simple message sent using curl, as in the Splunk documentation for HEC, but I am not able to receive the logs in Splunk.

appender.mycomp.type = http
appender.mycomp.name = mycomp
appender.mycomp.url = http://localhost:8088/services/collector
appender.mycomp.token = 9548e361-xxxx-xxxx-xxxx-xxxxxxxxxxx
appender.mycomp.layout.type = PatternLayout
appender.mycomp.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n

Is any other configuration required to receive the logs in Splunk 8.x? Please help. Thanks in advance.
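One thing that stands out: Log4j 2's stock Http appender has no token attribute, and HEC expects an Authorization: Splunk <token> header plus JSON on the /services/collector endpoint, so the posts above are likely being rejected. A hedged sketch using the log4j2 appender from Splunk's own splunk-library-javalogging instead (attribute names follow that library; verify against its documentation):

```ini
appender.splunkhttp.type = SplunkHttp
appender.splunkhttp.name = splunkhttp
# Base URL only; the library targets the collector endpoint itself
appender.splunkhttp.url = http://localhost:8088
appender.splunkhttp.token = 9548e361-xxxx-xxxx-xxxx-xxxxxxxxxxx
appender.splunkhttp.layout.type = PatternLayout
appender.splunkhttp.layout.pattern = %m
```

The library (and the stock Http appender, if kept) must be on the application's classpath.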
I have a table generating the fields Assignee, Support_tier, HR_Country, and Hostdomain. I have to assign values to 'Assignee' based on the values in the other fields. E.g.: if Support_tier is 901 and HR_Country is Canada, the Assignee value should be 'priya'; also, if Support_tier is 908 or 909, Assignee should be 'udit'. Kindly help with the query?
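A sketch of this kind of conditional assignment with eval's case() function (field types are assumptions; quote the tier values if Support_tier is a string field):

```
... | eval Assignee = case(
          Support_tier == 901 AND HR_Country == "Canada", "priya",
          Support_tier == 908 OR Support_tier == 909, "udit",
          true(), Assignee)
```

The final true() clause keeps the existing Assignee value when no condition matches.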
Hey, I am trying to write a NodeJS application using Splunk's JavaScript SDK, and I have a requirement to do some reading/writing to the KV Store. I would prefer not to use SPL (inputlookup/outputlookup) and to do it via the REST API (the KV Store endpoints) instead. But after extensive digging through the splunk-sdk documentation, guides, and examples, I cannot find anything in the JavaScript Splunk SDK related to the KV Store endpoints. My initial idea was to extend splunkjs.Service.Entity and/or splunkjs.Service.Collection, but then I thought maybe I'm missing something: either there is a reason this wasn't implemented in the first place, or it is implemented and I somehow missed it in the documentation. Can someone please point me in the right direction? Should I stick with my initial idea and extend the Entity and Collection classes, or is there some other way without creating jobs and searches?
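As far as I can tell the JS SDK indeed ships no dedicated KV Store class, but splunkjs.Service exposes raw REST helpers (get/post/del), and KV Store lives under the storage/collections endpoints. A hedged sketch (credentials, app, and collection name are placeholders):

```javascript
var splunkjs = require("splunk-sdk");

var service = new splunkjs.Service({
    scheme: "https", host: "localhost", port: 8089,
    username: "admin", password: "changeme",
    app: "myapp", owner: "nobody"   // KV Store collections are app-scoped
});

service.login(function (err) {
    if (err) { throw err; }
    // Read every record in the collection:
    //   GET storage/collections/data/<collection>
    service.get("storage/collections/data/mycollection", {}, function (err, response) {
        if (err) { throw err; }
        console.log(response.data);
    });
    // Writing is a POST of a JSON document to the same endpoint;
    // storage/collections/config manages the collections themselves.
});
```

Extending Entity/Collection as you describe would mostly end up wrapping these same calls, so either approach seems workable.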
We're working with a 30-day trial as we wait for procurement to purchase a full license. While learning and configuring the system, I'm working on getting rid of some events we don't want to see. So far it's not working. My first question is: will this work on a trial license? If it will, here's what my files look like. I've tried as many combinations and formats as I can find examples for. I had the Transforms settings all in one stanza to start with; below is my latest attempt. If I run "splunk btool check", I see no errors. Help, please!

props.conf:

[WinEventLog:Security]
TRANSFORMS-security = setnull0
[WMI:WinEventLog:System]
TRANSFORMS-wmisystem = setnull1
[WinEventLog:System]
TRANSFORMS-system = setnull2

transforms.conf:

[setnull0]
SOURCE_KEY = dest
REGEX = ^EventCode=(1107|4688|7036|10028)\D
DEST_KEY = queue
FORMAT = nullQueue
[setnull1]
SOURCE_KEY = dest
REGEX = ^EventCode=(1107|4688|7036|10028)\D
DEST_KEY = queue
FORMAT = nullQueue
[setnull2]
SOURCE_KEY = dest
REGEX = ^EventCode=(1107|4688|7036|10028)\D
DEST_KEY = queue
FORMAT = nullQueue
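One hedged observation: SOURCE_KEY defaults to _raw, which is where an ^EventCode=... pattern would have to match; SOURCE_KEY = dest points the regex at a different key entirely. EventCode is also rarely at the very start of the raw event, so a multiline anchor may be needed. A sketch of one corrected transform (the same change applies to the other two), which must live on the parsing tier (indexer or heavy forwarder):

```ini
[setnull0]
# SOURCE_KEY omitted so the default (_raw) is used;
# (?m) lets ^ match EventCode= at the start of any line in the event
REGEX = (?m)^EventCode=(1107|4688|7036|10028)\D
DEST_KEY = queue
FORMAT = nullQueue
```

As far as I know, nullQueue routing is not restricted by license type, so the trial should not be the blocker.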
We have charts/line graphs on our reports and want to send them by email as PDF, but Splunk does not seem to support PDF delivery on form dashboards. Can you suggest an alternative, or say whether this is going to be supported in newer versions? We are running version 7.2.6. I also happened to see the "report sender" app on Splunkbase, but it is third party. Is there anything out of the box from Splunk, or an app supported by Splunk, so that we can send the trellis charts as PDF emails?
Hi community! I think we need an add-on to integrate Splunk 8 with NetApp. The add-ons I can find are for older versions of Splunk. From what I understand, the NetApp is a closed box, and we don't know whether to send the logs from there or collect them from a syslog server. I don't know which would be the better practice...
Hi everyone, I have one panel, TimeOut. I have set the trend indicator on it to compare the count by percentage. The problem I am facing is that it always takes the last 2 days' values and shows the percentage increase/decrease by comparing only those two. If I select "last 7 days" from the date dropdown, that is, September 10 to September 16, I want it to show the percentage increase/decrease between September 10 and September 16 (the first and last values), but it shows the percentage difference between September 15 and September 16. Whatever date range I select, it always shows the percentage change for the last two days only. I want the percentage difference between the first and last values. Can someone guide me on where I am wrong? Below is my XML code.

<row>
  <panel>
    <single>
      <title>TIMEOUT</title>
      <search>
        <query>index="ABC" sourcetype=XYZTimeout $OrgName$ | bin span=1d _time | stats count by _time</query>
        <earliest>$field1.earliest$</earliest>
        <latest>$field1.latest$</latest>
      </search>
      <option name="colorBy">value</option>
      <option name="drilldown">all</option>
      <option name="height">100</option>
      <option name="numberPrecision">0</option>
      <option name="rangeValues">[0,10,25,40]</option>
      <option name="unit"></option>
      <option name="rangeColors">["0xFF0000","0xFF0000","0xFF0000","0xFF0000","0xFF0000"]</option>
      <option name="useColors">1</option>
      <option name="showSparkline">1</option>
      <option name="trendDisplayMode">percent</option>
      <drilldown>
        <set token="show_panel3">true</set>
        <set token="selected_value3">$click.value$</set>
      </drilldown>
    </single>
  </panel>
</row>
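For what it's worth, a single-value trend compares the last two data points by design; to compare the first and last values of the selected range, one sketch computes the percentage in SPL instead (field names are assumptions):

```
index="ABC" sourcetype=XYZTimeout $OrgName$
| bin span=1d _time
| stats count by _time
| stats first(count) as first_count, last(count) as last_count
| eval pct_change = round((last_count - first_count) / first_count * 100, 1)
```

After `stats count by _time` the rows are in ascending time order, so first() picks up the earliest day and last() the latest.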
Hello everyone, I have integrated the "MS Teams alert for Splunk" add-on into my Splunk cluster. I have added it as an alert action, and it is triggering alerts multiple times. Example: the alert has 5 result rows and I get 5 messages in MS Teams, but I need only one alert per trigger. I checked the alert configuration, and the trigger action is set to Once. In the same alert I have also configured sending to my email, and I get that only once, but in MS Teams I get it 5 times. Thanks in advance!
I am seeking a way to define a variable holding a static list of hosts to (re-)use in ad hoc searches. For example, instead of doing this every time:

index=os host=hosta OR host=hostb OR host=hostc ... host=hostnn

I would instead do something like this:

index=os host=$MY_HOSTLIST_VAR

I've been trying to do something with a lookup CSV file that I uploaded, but I can't seem to get the syntax correct.
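Assuming the uploaded CSV has a column named host, a subsearch over inputlookup expands into exactly that OR list (my_hostlist.csv is a placeholder name):

```
index=os [| inputlookup my_hostlist.csv | fields host]
```

The subsearch returns (host=hosta OR host=hostb OR ...) to the outer search. Eventtypes or search macros are alternative ways to give a reusable host list a name.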
Hi friends, if I execute the highlighted subsearch below on its own, I get results, but when I supply its result to the outer search, it does not return any results:

index=* env=X1 SourceName=*api* [search index=* env=X1 SourceName=*api* "Transaction" | eval "TraceID"=substr(Message,85,36) | table "TraceID"]

Please help with this. Thanks
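One common cause: a subsearch returns field=value pairs, so the outer search only matches events that actually have an extracted TraceID field. If the ID only appears inside the raw text of the outer events, renaming the column to search makes the subsearch return bare terms matched against _raw instead (a hedged sketch):

```
index=* env=X1 SourceName=*api*
    [search index=* env=X1 SourceName=*api* "Transaction"
    | eval TraceID = substr(Message, 85, 36)
    | table TraceID
    | rename TraceID as search]
```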
Hello, we have an alert in place that uses the REST API to determine when a server is using too much memory, at which point the server is restarted. It had been working great; however, last week we had an alert come through that listed every box connected to the DMC. This caused some restarts that created an issue. When looking at the stats for the machines at the time of the alert, none of the servers show meeting that condition. Am I doing something wrong here? We are using the search below:

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide
| eval percentage=round(mem_used/mem,3)*100
| where percentage > 90
| fields splunk_server, percentage, mem_used, mem
| rename splunk_server AS ServerName, mem AS "Physical memory installed (MB)", percentage AS "Memory used (%)", mem_used AS "Memory used (MB)"
| rex field=ServerName "\s*(?<ServerName>\w+[\d+]).*"
| table ServerName
| sort - ServerName
| stats list(ServerName) as ServerName delim=","
| nomv ServerName
I use the following query:

source="/opt/apps/spring-boot/abc/log/communication.log"
| rex "\"correlation\" : \"(?P<correlation>.*)\""
| transaction correlation

to find request/response entries and then show statistics on the failure rate for requests with specific integrations. However, when searching over the last 4 hours, the transaction command only shows hits for the first 52 minutes. I thought that perhaps there was some logging issue resulting in no entries, so I searched the last 60 minutes (which should logically have returned 0 hits), but then I got results for the first 57 minutes of that window. I thought the correlation must have found a duplicate at some point, so I searched for a correlation that occurred 15 minutes ago, but it was the only one, so I do not understand why it did not show up in the original 4-hour search. Any help is greatly appreciated!

//Jonathan
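For context, transaction silently evicts transactions that exceed its default limits or the memory available to it, which can produce exactly this kind of partial-range result. A sketch with explicit limits (the maxspan value is an assumption; closed_txn is the field transaction sets on transactions it considers complete):

```
source="/opt/apps/spring-boot/abc/log/communication.log"
| rex "\"correlation\" : \"(?P<correlation>.*)\""
| transaction correlation maxspan=15m keepevicted=true
| eval status = if(closed_txn == 1, "complete", "evicted/incomplete")
```

For simple request/response pairing, `stats count by correlation` is often a cheaper and more reliable alternative.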
Hi, we lost hours trying to pull a certificate, with a generic error:

2020-09-17 08:09:43,692 +0000 log_level=INFO, pid=24567, tid=MainThread, file=error_ctl.py, func_name=ctl, code_line_no=147 | REST ERROR[400]: Bad Request - Failed to fetch the certificate from server
File "/opt/splunk/bin/runScript.py", line 78, in <module>
  execfile(REAL_SCRIPT_NAME)
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/ta_opseclea_rh_cert.py", line 348, in <module>
  admin.CONTEXT_APP_AND_USER
File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 131, in init
  hand.execute(info)
File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 594, in execute
  if self.requestedAction == ACTION_CREATE: self.handleCreate(confInfo)
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/ta_opseclea_rh_cert.py", line 289, in handleCreate
  RH_Err.ctl(400, msgx=exc, logLevel=logging.INFO)
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/splunk_ta_checkpoint_opseclea/splunktaucclib/rest_handler/error_ctl.py", line 144, in ctl
  if logLevel >= logging.ERROR or isinstance(msgx, Exception) \
Traceback (most recent call last):
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/ta_opseclea_rh_cert.py", line 279, in handleCreate
  args = self.pull_cert(args)
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/ta_opseclea_rh_cert.py", line 228, in pull_cert
  opsec_sic_name, cert_name = cert.pull()
File "/opt/splunk/etc/apps/Splunk_TA_checkpoint-opseclea/bin/ta_opseclea_rh_cert.py", line 133, in pull
  raise CertException("Failed to fetch the certificate from server")
CertException: Failed to fetch the certificate from server

Then we downloaded the OPSEC SIC utilities version 6.1 for Linux 30 from this link https://supportcenter.checkpoint.com/supportcenter/portal?ventSubmit_doGoviewsolutiondetails=&solutionid=sk63026 and everything worked. Please review the app on Splunkbase. Thanks and regards