All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


So I'm trying to enrich one search by pulling fields from another index; they have a matching pair of fields, Serialnumber and SERIALNUM. How would I do this?
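A common way to join events from two indexes without a subsearch is to search both at once, normalize the serial-number field, and group with stats. This is only a sketch; the index names below are placeholders for your own:

```
(index=index_a) OR (index=index_b)
| eval serial=coalesce(Serialnumber, SERIALNUM)
| stats values(*) as * by serial
```

Events sharing a serial number end up on one row, with fields from both indexes side by side.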
Hi there. I'm trying to configure Splunk to receive data on TCP port 514. I'm using the default Splunk certificates, which are generated in /opt/splunk/etc/auth. I configured inputs.conf:

[tcp-ssl:514]
sourcetype = syslog

[SSL]
rootCA = /opt/splunk/etc/auth/cacert.pem
serverCert = /opt/splunk/etc/auth/server.pem

On my network device I configured syslog to be sent to my Splunk server address over TCP port 514 and imported cacert.pem. After that I can't explore logs from this device; the logs appear hashed (garbled). What am I doing wrong?
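For comparison, a minimal tcp-ssl input that works with the default certificates usually also needs the certificate password (the default Splunk server.pem is protected with the password "password"); the values below are assumptions to adapt, not a confirmed fix:

```
[tcp-ssl:514]
sourcetype = syslog

[SSL]
serverCert = /opt/splunk/etc/auth/server.pem
sslPassword = password
requireClientCert = false
```

If events still index as garbled binary, a frequent cause is the device sending plaintext (or UDP) syslog to a port Splunk is treating as SSL.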
outputs.conf:

[syslog:syslogGroup]
server = x.x.x.x:514

props.conf:

[helloworld]
TRANSFORMS-rsyslog = syslogRouting

transforms.conf:

[syslogRouting]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup

This config is applied on an indexer (many tutorials use a heavy forwarder, which by default does not index data). This works perfectly for forwarding raw data as syslog to another system; however, the raw data is also being indexed. Is there a way to prevent the indexing from happening? I've tried adding a nullQueue stanza to props.conf without luck.
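One commonly suggested approach is to chain a second transform that sends the event to the nullQueue after the syslog routing key has been set; the dropFromIndex stanza name is illustrative. Note that in some versions nullQueue drops the event before the syslog output as well, in which case a dedicated heavy forwarder with indexAndForward = false in outputs.conf is the usual fallback:

```
# props.conf -- routing transform first, then the drop
[helloworld]
TRANSFORMS-rsyslog = syslogRouting, dropFromIndex

# transforms.conf
[syslogRouting]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup

[dropFromIndex]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```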
Hi Team, we have a KV store with about ~95 million records dating back 3 years. The key of the KV store is a unique numeric field, and we also have a timestamp among other fields. We have a requirement to retain only 1 year's worth of data. I would like to know the best way to get rid of the old data. Also, is there a way to specify that any data older than 1 year should be dropped going forward, like an index retention time? We have a clustered search head and indexer environment, on Splunk version 6.11. Thank you!
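KV stores have no built-in retention, so the usual pattern is a scheduled search that rewrites the collection keeping only the records you want. A sketch, assuming the lookup definition is called my_lookup and the timestamp field holds epoch seconds (adjust both to your environment, and test on a copy first, since outputlookup replaces the collection):

```
| inputlookup my_lookup
| where timestamp >= relative_time(now(), "-1y@d")
| outputlookup my_lookup
```

Scheduling this daily also covers the "going forward" requirement, since anything crossing the 1-year boundary is dropped on the next run. With ~95 million records you may need to raise lookup-related limits in limits.conf or purge in batches.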
Hi All, we are looking to monitor the SSRS Web/App servers via AppDynamics. The SSRS Web/App servers host the SSRS engine, which is .NET, and in AppD we can already see, via one of our linked applications, that it is tracking SSRS web service calls. Since there is no AppD agent on the SSRS servers, we can't determine whether the SSRS web services are taking more time when they hit the SSRS DB or are waiting on some other process on the SSRS engine. Is there a way we can add standalone services into the config file of that linked application so it would start monitoring the SSRS engine, or will we have to install an agent separately on our SSRS servers? Regards, Dorothy
Hi All, need help! My Splunk-Remedy integration is working fine, and I am able to create a new incident via a search command. When I create an alert with a scheduled search and the alert fires, it creates a new incident in Remedy; but after the first alert, all further alerts triggered by the same search are appended to that same incident rather than creating a new one. What we want is that each time our scheduled search runs (and its condition matches), it creates a new incident rather than updating the old one. What am I missing?
We are using Splunk Cloud. I want to modify the From address (alerts@splunkcloud.com) and use a custom email address when an alert email is generated.
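On Splunk Enterprise the From address is the from setting under the [email] stanza of alert_actions.conf (or Settings > Server settings > Email settings in the UI); a sketch, with a placeholder address:

```
[email]
from = alerts@example.com
```

On Splunk Cloud you may not have direct access to this file, in which case the change typically has to go through Splunk Support.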
Hi, I plot a graph in a dashboard where the x-axis is a series from 1 to 2001. I want to relabel 1-2001 as 500-3000 (yes, the step is >1, because the array has 2000 steps); the 500-3000 range is only for presentation in the dashboard. In case it helps, please see the picture and the code:

source="tcp:514"
| streamstats values(_raw) as value
| makemv value
| mvexpand value
| search value<0
| rename _time AS series
| fields - _time
| streamstats count AS series
| eval series=printf("%05d",series)
| eval series1=case(
    series>=0 AND series<130,"Anomaly",
    series>=131 AND series<250,"Cell_3G",
    series>=250 AND series<999,"Anomaly",
    series>=1000 AND series<1100,"Cell_4G",
    series>=1101 AND series<1499,"Anomaly",
    series>=1550 AND series<1650,"WIFI",
    series>=1651 AND series<2001,"Anomaly")
| xyseries series series1 value
| head 2001
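One way to do this purely in SPL is a linear remap of the series values before xyseries: mapping 1 to 500 and 2001 to 3000 means new = 500 + (old - 1) * 1.25, since (2001 - 1) * 1.25 = 2500. A sketch to place after the streamstats count (and in place of the printf, which would otherwise zero-pad the old values):

```
| eval series = 500 + (series - 1) * 1.25
```

The case() boundaries would then also need to be restated in the new 500-3000 coordinates.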
Hi all, we have a machine agent installed on a Linux box with SIM enabled, and we see the memory usage constantly shown as 5%, though it varies when checked on the server manually; please see the output below. PS: All other data, such as disk space, CPU, and network, is reporting correct values; the issue is only with the memory usage.

free -m
             total       used       free     shared    buffers     cached
Mem:         64305      20584      43721      10251       1967      16631
-/+ buffers/cache:       1985      62320
Swap:         4095          0       4095

free -g
             total       used       free     shared    buffers     cached
Mem:            62         20         42         10          1         16
-/+ buffers/cache:          1         60
Swap:            3          0          3

Any help is much appreciated. Thanks in advance.
Splunk failed to add a license to the enterprise stack, with the error "stack already has this license, cannot add again". OS: Windows Server 2012 R2, Splunk Enterprise 7.3.3. Does anyone know how to solve it? Thanks!
My search query is this:

DESCRIPTION="sump pump" OR (DESCRIPTION="ejector pump" AND DESCRIPTION="run/stop")
| rex field=CREATEDATETIME "2019+ (?[^,]+)"
| rex field=CREATEDATETIME "(?[^\s]+)"
| rex field=TIMEONLY "(?<Hour>.):(?<Minute>.):(?<Second>.)\s(?<AM>.)"
| eval TIMEONLY = Hour*3600 + Minute*60 + Second
| eval AM=case(AM="AM","0",AM="PM","43200")
| eval TIMEONLY=TIMEONLY+AM
| sort by !TIMEONLY
| transaction DESCRIPTION startswith=VALUE="RUN" endswith=VALUE="STOP"

From the result I get, I have created a field for TIMEONLY, but I am stuck on getting the duration between the run and stop times. What can I do to subtract the run time from the stop time and get the active duration?
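One thing worth knowing: the transaction command automatically adds a duration field, the difference in seconds between the _time of the first and last event in each transaction. Since the events still carry _time, a sketch like this may already give the active time without any manual subtraction:

```
DESCRIPTION="sump pump" OR (DESCRIPTION="ejector pump" AND DESCRIPTION="run/stop")
| transaction DESCRIPTION startswith=eval(VALUE=="RUN") endswith=eval(VALUE=="STOP")
| table DESCRIPTION duration
```

If you do need the hand-rolled seconds-since-midnight approach, the equivalent is the stop TIMEONLY minus the run TIMEONLY within each transaction.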
Hi, we have installed the .NET application agent on the server per the Getting Started wizard utility instructions. It shows the agent as connected, but it never came out of the "waiting for the data" step. Steps we followed:
1. Unzipped dotNetAgentSetup64-4.5.1.2672 on the server.
2. Executed installer.bat with admin access.
3. Restarted the AppDynamics coordinator service and the IIS service.
4. Started a load test on the server to put heavy traffic on the application.
But the SaaS controller never showed the traffic on the dashboard. The only warning logged in the agent log file is:

"2020-02-25 14:45:20.0456 7768 AppDynamics.Coordinator 1 9 Warn RegistrationChannel Auto agent registration FAILED! Check the controller log for details."

Can one of you please help me resolve this? It's very urgent.
As of now we are monitoring 6 BDA nodes using Splunk (a scripted Python input that monitors nodes and services via the CDH API). It has been planned to increase the number of nodes from 6 to 12. Will our current dashboard be impacted by the increase in nodes? Please advise. Thanks.
Hello, I am using the Splunk Web Framework TableView component on a custom dashboard, and I have enabled the "wrap" property on the component on my page. In Firefox, this gives me the intended behavior at standard zoom: text wraps vertically so that my TableView results fit horizontally on the page without any need for a horizontal scrollbar. However, the default zoom level in Chrome does not share this behavior. Splunk wraps some of the text, but not enough to fit everything to the width of the page, so the table results get a horizontal scrollbar. I see similar behavior in Firefox if I increase the zoom level above 100%, and if I scale down to 90% in Chrome, I get the expected behavior. Because the scrollbar appears at the bottom of the control, which is often off the page, it may not always be clear to the user that there is more data overflowing horizontally that they are not seeing. Is there any way to ensure that the TableView component will wrap text vertically so the results fit the width of the page without a horizontal scrollbar? Thank you very much for any help you can provide.
Just looking for the best-practice solution to the problem below. I'm pretty new to Splunk, so I feel the answer might be quite simple.

The problem: currently, a million logs come into a location daily. At the end of every month, these logs are indexed, and a report based on search results is created. Since thirty million logs are all processed in one block, it takes a lot of time to index them, and an even longer time to search.

The fix: a single search runs over the course of the month, indexing new logs as they arrive, searching them, and appending all results to one large XML, CSV, or similar file.

Possible implementations:
• Set an alert that triggers on detecting new files to be indexed, i.e. if not already indexed, index them, immediately run the search on these new files, then append the resulting search data to the file.
• Run tscollect daily on data that is not yet in a .tsidx, to collect a relevant subset of data from raw, then process it in a block to create the report at end-of-month using the quicker tstats.
• Simply set a scheduled search (over the last 24h) to run daily after the logs are indexed, appending results to a file.

Thanks for the help!
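The third option is essentially Splunk's built-in summary indexing: a scheduled daily search writes its (much smaller) results into a summary index with collect, and the month-end report reads from there. A sketch under assumed names (the index, fields, and source value would all be yours):

```
index=my_logs earliest=-1d@d latest=@d
| stats count by status, host
| collect index=summary source=daily_log_rollup
```

The month-end report then becomes a search over index=summary source=daily_log_rollup for the month, which touches thirty small daily result sets instead of thirty million raw events.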
We are trying to set up email alerts, but we cannot send directly to the internal Exchange system. How can I set up Splunk to send the emails to a Postfix SMTP relay?
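Splunk just needs the relay named as its mail host, either via Settings > Server settings > Email settings or in alert_actions.conf. A sketch with a placeholder hostname (TLS and auth settings depend on how the relay is configured):

```
[email]
mailserver = smtp-relay.example.com:25
```

Postfix on the relay side must also be configured to accept and forward mail from the Splunk server's IP (e.g. via mynetworks).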
Is it possible to monitor whether processes are running using metrics data and SAI? I want to push out a config via the UF that says: monitor these X processes, and alert should any of them stop. Thanks.

Also, I have a single UF reporting to Splunk, and my SAI "Overview" dashboard looks like this (last 15 mins):

                         UPTIME(h:m:s)
top  10547  root  0  0.2  00:00:00
top  10817  root  0  0.2  00:00:00
top  11789  root  0  0.2  00:00:00
top  13036  root  0  0.2  00:00:00
top  14295  root  0  0.2  00:00:00
top  17779  root  0  0.1  4+18:58:48

What is this telling me? I only have one instance of top running, which shows as 4 days+. Where are all the other PIDs coming from?

ps -ef | grep -i top
root 17779 24995 0 Feb21 pts/1 00:03:02 top
root 30528 26798 0 12:27 pts/0 00:00:00 grep --color=auto -i top
I've seen it suggested before, and have definitely witnessed myself, that for searches involving any significant amount of data it's always light years faster to grab all the data and then correlate it afterwards via stats, versus using a subsearch in your base query. To illustrate what I mean, say you have two sourcetypes, "left" and "right", each containing its own set of data with a shared unique identifier we'll call "unique_id". So why does a search like this:

index=left sourcetype=left [search index=right sourcetype=right | stats count by unique_id | fields unique_id]
| stats count

take massively longer (and in a lot of cases just time out indefinitely because the memory limits of a subsearch are exceeded) than something like this:

(index=left sourcetype=left) OR (index=right sourcetype=right)
| eval left_count=if(sourcetype=="left",1,0)
| stats values(sourcetype) as sourcetypes, sum(left_count) as left_count by unique_id
| search sourcetypes=*left* sourcetypes=*right*
| stats sum(left_count) as count

I'm wondering why the subsearch is always so much slower for something like this?
The "Start Trial" button does nothing on the triage page. I tried two accounts. Any idea why?
Hello, I have the following table:

column1    column2
Andrew     Andrew
George     George
Paris      Berlin

I would like to get the following as output:

column1    column2
Paris      Berlin

The tables come from use of the | table command: | table column1, column2. Is there any way this can be done? I tried:

| table column1, column2 | where NOT match(column1, column2)

but no results are found.
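The likely snag is that match() treats its second argument as a regular expression, not a literal string. For a plain equality test between the two columns, != is enough; a sketch:

```
| table column1, column2
| where column1 != column2
```

If the comparison should ignore case, wrap both sides in lower(), e.g. where lower(column1) != lower(column2).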