All Topics

Hi Folks, I want to check at what time a URL came back up. The URL is already added in Website Monitoring. For example, if the URL went down at 12 PM and was brought back up at 1 PM, this dashboard panel should indicate that the URL came back up at 1 PM. I want to monitor multiple URLs for this scenario. I'd appreciate your expert advice.
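A minimal sketch of the kind of search that could surface that transition, assuming the monitoring events carry a url field and an UP/DOWN-style status field (the index, sourcetype, and field names below are assumptions and need to be adapted to what the Website Monitoring app actually writes):

index=web_monitoring sourcetype=web_ping
| sort 0 url _time
| streamstats current=f window=1 latest(status) as prev_status by url
| where prev_status="DOWN" AND status="UP"
| eval recovered_at=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table recovered_at url prev_status status

Because streamstats carries each URL's previous status forward, the where clause keeps only the events where a URL flipped from DOWN to UP, i.e. the "came back up" moment per URL.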
I don't know why there isn't a dedicated place for "SignalFx"-related questions. According to the SignalFlow API doc (https://dev.splunk.com/observability/reference/api/signalflow/latest#endpoint-start-signalflow-computation), there is a "start" parameter and a "stop" parameter, and I'm totally confused by the explanation: "The date and time that the computation should start/stop, in *nix time".

Let's say the current time is 2021-09-22 20:27:00 (America/Los_Angeles).

What if I choose a time range in the past?

What happens if the start time equals the stop time?
  start: 2021-09-21 00:00:00
  stop:  2021-09-21 00:00:00
Am I supposed to get nothing because the computation stops immediately? And how should I understand a computation that is supposed to start/stop yesterday?

What happens if the stop time is greater than the start time?
  start: 2021-09-21 00:00:00
  stop:  2021-09-22 02:00:00
Am I supposed to wait for 2 hours for a computation that starts/stops yesterday?

And what if I choose a time range in the future?

What happens if the start time equals the stop time?
  start: 2021-09-23 00:00:00
  stop:  2021-09-23 00:00:00
Am I supposed to get nothing because the computation stops immediately? And am I supposed to wait until 2021-09-23 00:00:00 for the computation to start?

What happens if the stop time is greater than the start time?
  start: 2021-09-23 00:00:00
  stop:  2021-09-23 02:00:00
Am I supposed to wait for 2 hours for a computation that starts/stops in the future?
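As a worked example of the "*nix time" part (my own reading, not from the docs): these are Unix epoch timestamps, so 2021-09-21 00:00:00 America/Los_Angeles (PDT, UTC-7) is 2021-09-21 07:00:00 UTC, i.e. 1632207600 seconds since the epoch; if the endpoint expects milliseconds, that becomes 1632207600000. Whichever unit the endpoint actually takes should be confirmed against the linked reference.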
Hi, this seems super dumb, but I've been fiddling with it for an embarrassingly long time now. It's been a couple of years since I've written any subsearches. I'm attempting to project data from the subsearches into a summary table (all from the same root search results). This is running on Splunk Cloud under a trial license. See the dumbed-down queries below.

This happily returns a result:

index=xxx
| search index=xxx admintom
| stats count as x
| table x
| table x

This returns nothing (`format` shows `NOT()`):

index=xxx
    [ search index=xxx admintom
      | stats count as x
      | table x ]
| table x
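As an aside, if the goal is only to project subsearch aggregates into columns of a summary table, something along these lines may behave more predictably than relying on the implicit format of a filtering subsearch (the index name and search term are just the placeholders from the question):

index=xxx admintom
| stats count as total
| appendcols
    [ search index=xxx admintom
      | stats count as x ]
| table total x

Here appendcols attaches the subsearch's result columns to the main result row instead of turning the subsearch output into a search filter.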
Hello Fabulous Splunk Community!

You might have noticed something new in Community... options for personalizing your community profile! We're very excited to share that we're enhancing our community experience to pave the way for personalizing and customizing your experience throughout the site.

Now, members will have the option to add "Personalization Information" on their profiles, and we've added some call-outs in the site's header and right-hand rail. The information you add here will include:

- Your responsibility or use of Splunk (Are you a Splunk Admin? A Power User? A Developer? etc.)
- Your practice areas (Do you work in IT? Security? Application Development? etc.)
- Products you use most (Splunk Enterprise, Splunk Cloud Platform, ES, etc.)
- Your industry

Additionally, our call-outs also prompt you to consider adding a Bio if you haven't already.

What will we do with these data points? First, only your Bio will be part of your public profile. The personalization data points are private and will help us customize your experience, serve up content or areas that may be of interest, and invite you to special interest-driven areas as they are added to the platform. Adding these additional data points is, of course, completely optional, and it's part of how we'd like to fulfill our promise to you all... to provide you with the connection, learning, and enjoyment you appreciate from our community.

What's next? The personalization data will help us understand where and how to customize the site and your ways of experiencing the site. In the next few months, we plan to design and launch a responsive experience and open up new practice-based discussion areas, so you can talk shop amongst community peers that understand you best. More on these next steps soon!

Getting started with Personalization... We're taking some big, bold steps, and we'll need your help. We'd love for you to check out the new features and share as much of your role, practice, and product use as you're comfortable with, and we've tried to make it pretty easy. You'll see some call-outs in the site header and right-hand rail... just click to jump to the new personalization section of your profile and tell us a little more about yourself. That's it! There might just be a little something special for the earliest of adopters.

And if you've got some feedback, we're all ears. Sound off in the comments below, or shoot me a private message!

Cheers,
Bryan Jennewein, Sr. Director - Splunk Community

Note: Does the site look... a little wonky to you? It happens sometimes with big deployments (like this one!), so if things "look weird" try clearing your cache and cookies, or just reach out!
I am using the Splunk Add-on for Amazon Web Services to ingest json.gz files from an S3 bucket into Splunk. However, Splunk is not unzipping the .gz files to parse the JSON content. Is there something I need to do for the unzipping to happen?
I have an EC2 syslog client and a macOS machine with Splunk Enterprise installed. I want Splunk Enterprise to be my syslog server, so I should only need to configure the syslog client to send its syslog to Splunk Enterprise, with no separate syslog server to set up.

On the Splunk server I created a UDP data input, exposed port 514, specified an index for this data input, and set the sourcetype to 'syslog'. On the syslog client side, I configured the destination to be *.* <Splunk Enterprise IP>:514 in its rsyslog.conf file. I tried to use logger to generate syslog on the client side, e.g. logger -p local0.crit "...", but no events show up in my index when I search.

Basically, my understanding is that Splunk Enterprise can function as a syslog server that receives messages from syslog clients. (Screenshot is from: https://www.youtube.com/watch?v=BQU-bsSCXhk)

Did I do any step incorrectly, or am I missing a step?
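For reference, a minimal sketch of the two sides, assuming UDP on port 514 (the index name is a placeholder; binding to a port below 1024 generally requires Splunk to run with sufficient privileges, and rsyslog's legacy forwarding syntax uses a single @ for UDP and @@ for TCP):

# inputs.conf on the Splunk Enterprise host (sketch)
[udp://514]
sourcetype = syslog
index = main
connection_host = ip

# /etc/rsyslog.conf on the EC2 client (sketch)
*.* @<Splunk Enterprise IP>:514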
Hello, I am having an issue where Dashboard Studio has ceased to export background images (loaded onto the canvas) with PDF and PNG download. When I download reports (via the download link in the upper right corner) in PDF/PNG format, the canvas background image is gone and only the dashboard's configured background color is exported. I am currently using a canvas background image, and it is missing from the export of every dashboard -- this is quite problematic. Does anyone have a solution? Thank you.

Edit: I figured this might be a KV store linking issue, so I uploaded the image to the server and linked to it rather than uploading it directly. The PDF export still does not contain the canvas background image. I also thought this might be a size limitation, however the file is only around 1 MB in total, so that does not appear to be the case either.
Hi there everyone,

Today I'm trying to update my Enterprise Console to the latest version; right now I'm using AppDynamics version 20.7.0-22903. I downloaded Enterprise Console 21.4.6, which is the latest version available from my management interface. Following all the steps, I got stuck at one action. The database is running without errors, and I can access the controller host database from the console host through bash without problems. Even with everything working, every time I try to update or execute the platform-setup-x64-linux-21.4.6.24635.sh script I get this message. Is there something more I can debug to find out what is going on? Thank you.
I am using Splunk Cloud version 8.2 and trying to collect logs from domain controllers by installing UFs on them. I have a deployment server set up to manage those UFs, and here is my doubt: in order to connect the UFs to the Splunk Cloud instance we need to install the UF credential package on each of the UFs. Can I install it with the help of the deployment server, so that I don't have to install it manually on each of the UFs? I would really appreciate it if someone could guide me on this. Thanks in advance!
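In case it helps frame the question: the credential package is itself just a Splunk app, so one pattern is to drop it into the deployment server's deployment-apps directory and map it to a server class, roughly like this (the class name, whitelist pattern, and app folder name below are made up for illustration):

# serverclass.conf on the deployment server (sketch)
[serverClass:domain_controllers]
whitelist.0 = dc*

[serverClass:domain_controllers:app:100_splunkcloud]
stateOnClient = enabled
restartSplunkd = true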
Hi, I am trying to get the most recent value per item and filter for a specific status.

item    itemdesc  _time                status
ITEM01  COKE      2021-09-21 22:00:05  FAILED
ITEM01  COKE      2021-09-20 13:00:15  FAILED
ITEM02  COKE      2021-09-21 21:00:12  PASSED
ITEM02  COKE      2021-09-21 20:00:05  PASSED
ITEM02  COKE      2021-09-21 19:00:05  FAILED
ITEM03  COKE      2021-09-20 12:00:05  FAILED
ITEM03  COKE      2021-09-19 11:00:15  PASSED

I need to check the most recent status by item and pull it only if the status is FAILED. Expected output:

ITEM01  COKE  2021-09-21 22:00:05  FAILED
ITEM03  COKE  2021-09-20 12:00:05  FAILED

In this case ITEM02 is ignored since its most recent status is PASSED.
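A minimal sketch of one way to express this, assuming the four columns above are already extracted as fields:

... base search ...
| stats latest(_time) as _time latest(status) as status by item itemdesc
| where status="FAILED"
| convert ctime(_time)
| table item itemdesc _time status

stats latest() picks the value from the most recent event per item, and the where clause then keeps only the items whose latest status is FAILED.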
I need to know whether the default encryption between Splunk components can be checked via the GUI. I am talking about the SSL encryption. Also, please help with ensuring that delivery of data from forwarders (FWs) to indexers is acknowledged.
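On the second point, indexer acknowledgement is normally switched on per output group on the forwarders; a minimal sketch (the group name and server addresses are placeholders):

# outputs.conf on the forwarders (sketch)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true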
Hello everyone, I am streaming CloudWatch logs to Splunk through Firehose, and I am facing the following issue: some JSON records are being indexed(?) twice and show up twice in search. The only difference between the duplicate records is the time of indexing. I am trying to figure out how to debug this. Each record shows up only once in the source log group in CloudWatch and in the S3 backups, so it's either Firehose sending a particular record twice or Splunk processing the same record two times. Do you have an idea how I can check these theories? I didn't find much useful info in the Splunk HTTP Event Collector logs; they only contain technical info about the transaction: size/speed/time.
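One way to narrow it down is to check whether the duplicate events differ only by index time; a rough sketch (the index and sourcetype are placeholders):

index=cloudwatch sourcetype=aws:firehose
| eval index_time=_indextime
| stats count values(index_time) as index_times by _raw, _time
| where count > 1

If the duplicates show two distinct index_times for an identical _raw and _time, the same record was ingested twice rather than duplicated later at search time.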
Hi, I'm attempting to set up Splunk Connect for Syslog. I'm running into some issues and could use some assistance troubleshooting. It seems the Slack workspace is restricted to certain corporate emails. Is there a way to add my company/account so I can register and work with other users on troubleshooting?
I have the Monitoring Console in distributed mode on my cluster master. I need to learn how to configure it to show the alerts and warnings I receive on my ES (Enterprise Security) instance. I see small issues and warnings about ES, but I'd like to see more. Thanks in advance.
Greetings,

At my current company we're using Splunk Cloud, and I'm looking to deploy a new heavy forwarder to forward data along to the Cloud instance. The question is, what's the appropriate way to do this?

From Splunk Cloud, I downloaded the Universal Forwarder package from "Apps > Universal Forwarder". I also downloaded the credential package from there as well. Both have been installed on an internal host (which is intended to be the heavy forwarder) and I'm now forwarding data over to Splunk as expected. The only issue is that Splunk picks it up as a universal forwarder in the Cloud Monitoring Console (which makes sense, given that I installed the Universal Forwarder package). But what I'm really looking to do is deploy a heavy forwarder.

From what I've read thus far, it looks like I have to install a full Splunk Enterprise instance on the internal host and enable forwarding on it to make it a heavy forwarder. How would I best be able to do this, and would I need an additional license to do so? I'd like to manage the .conf files on the forwarder and create custom field extractions and all that good stuff from the host directly, rather than doing that through the Splunk Cloud UI.

Looking for some additional insight. Thank you in advance!
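For context, a full Splunk Enterprise instance acting as a heavy forwarder typically carries the Splunk Cloud credential app for its outputs plus something along these lines, so data is parsed locally but not also indexed locally (a sketch of one common setting, not the exact config the credential app ships):

# outputs.conf on the heavy forwarder (sketch)
[indexAndForward]
index = false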
I want to monitor a log file which gets created every day with the new day's date in its name. If I configure inputs.conf as below, it will also monitor all the older log files from previous days, which will flood my source list; I want only the latest (today's) log file to be monitored. Please suggest.

[monitor:////path to direct/access_log.*]
sourcetype = log4j
ignoreOlderThan = 7d
crcSalt = <string>
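If the older files are still on disk but no longer being written to, one knob that is sometimes used is tightening ignoreOlderThan so only recently modified files are picked up; a sketch of that variation on the stanza above (the path is kept as in the question; ignoreOlderThan is based on file modification time):

[monitor:////path to direct/access_log.*]
sourcetype = log4j
ignoreOlderThan = 1d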
Hello, we have built a dashboard on our stack from data we receive from another stack using the federated search feature; however, there is a performance issue when loading this dashboard. We are seeing the error below very frequently on the dashboard:

Socket error during transaction ReadWrite error.

Is there a way to improve this dashboard's performance? Thanks
Hello guys!

I use some reports with the | multireport command, like this:

...search...
| multireport
    [ | table _time L5PS1GutStk
      | sort + _time
      | where L5PS1GutStk!=""
      | autoregress L5PS1GutStk
      | reverse
      | fillnull
      | stats count(eval(L5PS1GutStk!=L5PS1GutStk_p1 AND L5PS1GutStk!=0)) as passes1 ]
    [ | table _time L5PS2GutStk
      | sort + _time
      | where L5PS2GutStk!=""
      | autoregress L5PS2GutStk
      | reverse
      | fillnull
      | stats count(eval(L5PS2GutStk!=L5PS2GutStk_p1 AND L5PS2GutStk!=0)) as passes2 ]
...rest of the search...

This worked until yesterday, when Splunk was updated from 7.3.3 to 8.2.2. Then the search started throwing an error. It had to be fixed very fast, so we created a simpler but less correct search as a stopgap.

Today I investigated further what went wrong and what causes this issue. I first thought of the undocumented multireport command, suspecting it had been removed or changed in the new version. But a colleague had a similar search with multireport and it still worked. I removed the whole multireport and the search worked again, so something about the multireport was wrong. Then I removed line after line in the subsearches to pinpoint the source of the problem.

Finally, after removing the | table command in the first line of each subsearch, the whole search worked again! I had found the source. I replaced | table with the | fields command and everything works well again; crisis averted. I then tested one more thing: replacing | table with | fields in only one subsearch. That also worked, no error.

So my question to you is: does anybody know what went wrong here, and what differences between the Splunk versions produce this error? Thanks!!

PS to the Splunk team: please never delete the multireport command, and make it official; it is a very useful command!
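For completeness, the working variant described above simply swaps | table for | fields inside each subsearch, e.g. for the first one:

| multireport
    [ | fields _time L5PS1GutStk
      | sort + _time
      | where L5PS1GutStk!=""
      | autoregress L5PS1GutStk
      | reverse
      | fillnull
      | stats count(eval(L5PS1GutStk!=L5PS1GutStk_p1 AND L5PS1GutStk!=0)) as passes1 ]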
Hello All,

My issue: we are receiving files from Source1 which contain several types of logs. We want to split them and send them to different indexes. The problem is that the logs have no unique fields, so it is difficult to create a REGEX in transforms.conf.

Sep 21 08:48:29 10.128.38.16 2021-09-21 06:48:28.7004|INFO|Robot|21.09.2021 08:46:21|21.09.2021 08:48:28|AttendedBot|OK|Account: ....

Hence we have decided to put a flag there -- "Y@9?":

Sep 21 08:48:29 10.128.38.16 2021-09-21 06:48:28.7004|INFO|Y@9?|Robot|21.09.2021 08:46:21|21.09.2021 08:48:28|AttendedBot|OK|Account: ....

Now I would like to trigger on this flag in transforms.conf (let's say, to send the event to a new index) and afterwards remove it with SEDCMD. Everything works fine when SEDCMD and transforms.conf are not configured at the same time, but when they are, SEDCMD is applied first, so transforms.conf has nothing to trigger on.

Question: is there a possibility to apply transforms.conf before SEDCMD?

Thank you very much.
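For reference, the kind of pairing being described might look roughly like this (the sourcetype, class names, and target index are placeholders; which of the two wins when both are configured is exactly the question being asked):

# props.conf (sketch)
[source1_sourcetype]
TRANSFORMS-route_flagged = route_flagged_to_other_index
SEDCMD-strip_flag = s/\|Y@9\?//g

# transforms.conf (sketch)
[route_flagged_to_other_index]
REGEX = \|Y@9\?\|
DEST_KEY = _MetaData:Index
FORMAT = other_index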