All Topics

Hi Splunk gods, I have a question. I have an environment where heavy forwarder logs are sent to a clustered indexer. I need the multiple indexes below merged into a single index, index_general, so that when a user searches index_general they can find all the logs contained in the three indexes.
1) Is this configuration feasible? index_fw -> index_general, index_window -> index_general, index_linux -> index_general
2) If yes, should this configuration be done on the HF or on the indexer?
3) If yes to question 2, which config file should be edited?
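For context, a hedged sketch of what such routing could look like: if the goal is for new events to land in index_general, the index rewrite is done at parse time on the HF (props.conf plus transforms.conf); already-indexed data is not moved by this. The sourcetype and stanza names below are placeholders:

```
# props.conf (on the HF); your_sourcetype is a placeholder
[your_sourcetype]
TRANSFORMS-route_general = route_to_general

# transforms.conf
[route_to_general]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index_general
```

An alternative that needs no routing at all is a search-time eventtype or macro that expands to index=index_fw OR index=index_window OR index=index_linux, which gives users a single thing to search without merging any data.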
Hi, I am tracking service requests and responses and trying to create a table that contains both, but the requests and responses arrive as separate lines ingested into Splunk. I have a common field (trace) that is present in both lines and unique for each request/response pair. Example:
line1: trace: 12345 , Request Received: {1}, URL: http://
line2: trace: 12346 , Request Received: {2}, URL: http://
line3: trace: 12345 , Response provided: {3}
line4: trace: 12346 , Response provided: {4}
In lines 1 and 3 trace is the common field, and likewise in lines 2 and 4. I want the end result as a table:
trace     request     response
12345     {1}         {3}
12346     {2}         {4}
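A hedged sketch of one common way to pair such lines, assuming the payloads can be extracted into fields first (the rex patterns below are guesses against the sample lines and would need adjusting to the real format):

```
index=... ("Request Received" OR "Response provided")
| rex "Request Received:\s*(?<request>\{\d+\})"
| rex "Response provided\s*:?\s*(?<response>\{\d+\})"
| stats values(request) as request values(response) as response by trace
```

The stats-by-trace step collapses each request/response pair onto one row, yielding the trace/request/response table described above.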
Hi all, when I try to update any installed app from the GUI I receive a 500 internal error. Checking the _internal logs I see this:
File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 655, in simpleRequest
raise splunk.ResourceNotFound(uri)
splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/services/apps/remote/entriesbyid/SplunkAdmins
I am on 9.0.3. I don't have a proxy set up, and all my file permissions are fine. I hope someone can help with this one. Thanks.
Hi experts, while adding the query below to my dashboard I am getting an error:
|eval Category=case(Ratings>"8","Promoter", Ratings>"7","Detractor", Ratings>"6" AND Ratings<"9","Passive")
Error: Unencoded <
Regards, Mayank
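Two things may be going on here, offered as hedged guesses: in dashboard Simple XML the < character must be escaped as &lt; (or the query wrapped in CDATA), which is what the "Unencoded <" error points at; and comparing Ratings against quoted strings compares lexicographically rather than numerically. A sketch addressing both, with the thresholds kept only for illustration:

```
<query><![CDATA[
| eval Category=case(Ratings > 8, "Promoter", Ratings > 6 AND Ratings < 9, "Passive", true(), "Detractor")
]]></query>
```

case() evaluates branches in order, so narrower ranges should come before broader ones; true() provides a catch-all branch.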
I have the table below and want to reflect the severity of the duration metric by highlighting the entire row.
index=...
| eval epochtime = strptime(startTime,"%a %m/%d %H:%M %Y")
| eval start = strftime(epochtime,"%a %d/%m/%Y %H:%M")
| eval duration = tostring(round(now()-epochtime), "duration")
| table time user client start duration
Is it possible to highlight or outline the entire row in red if the duration > 08:00?
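One hedged approach: compute a numeric duration and a flag field in SPL, then drive the coloring from that field (Splunk's built-in table formatting colors individual cells, so full-row highlighting typically needs a small JS table-row renderer keyed on the flag). A sketch of the SPL side, reusing the fields above; row_severity and the 8-hour threshold are illustrative:

```
index=...
| eval epochtime = strptime(startTime,"%a %m/%d %H:%M %Y")
| eval duration_sec = round(now() - epochtime)
| eval duration = tostring(duration_sec, "duration")
| eval row_severity = if(duration_sec > 8*3600, "critical", "ok")
| table time user client duration row_severity
```

A JS extension can then match rows where row_severity is "critical" and add a CSS class that colors or outlines the whole row.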
Hi, new Splunker here. I'm trying to download the free Enterprise software for study purposes. I've tried downloading in regular and incognito mode, but the "ACCESS PROGRAM" button stays greyed out. I've "read" the EULA and there is no checkbox for it. Am I missing something? Thx,
We are using the tabs extension (tabs.js, tabs.css from https://github.com/LukeMurphey/splunk-dashboard-tabs-example). Today we upgraded our Splunk from 8 to 9, and afterwards the tabs stopped working: they are visible, but nothing happens when we click a tab name. I have tried:
1. updating the code to the version in the git link above
2. the solution in https://community.splunk.com/t5/Dashboards-Visualizations/Tabs-in-Splunk-dashboard-not-working-after-Splunk-7-3-upgrade/m-p/482436#M31634
Any help would be appreciated.
Dear all, I want to monitor a Linux device via syslog, but when I go to add a data input I cannot find the TCP & UDP options. How do I add this input option?
When I use walklex on my indexes, it doesn't appear to follow the time specifications very well. Does anybody know what is, or might be, happening here?
Command: | walklex index=indexName type=field | stats count by field
Examples:
Index 1:
* The buckets generally take about 6 hours to roll from hot to warm.
* When I select last 24 hours, the query above returns results roughly as expected, with a bit of overflow due to the bucket time spans, but then there is a couple-week gap followed by some events from several weeks prior.
Index 2:
* Some buckets span upwards of 2 years.
* When I run walklex over the last 7 days, I get results all the way back to 2017. When I look for the bucket ID and guId of the bucket containing the old results using dbinspect over a 14-day time range, I do not see that local ID/guId combination, but over All Time I do find the pair. That bucket shows as hot and last edited in January of 2020... which, all the other weird behavior aside, shouldn't happen: walklex shouldn't be reading from hot buckets unless the docs are wrong?
Hello all, recently I had to move our index DB to a new location to free up some storage space. I followed the documentation at https://docs.splunk.com/Documentation/Splunk/9.0.3/Indexer/Moveanindex and everything is working fine, with the exception of the built-in Monitoring Console app. When loading the resource usage page for the instance, it just appears empty. I tried to narrow down the searches, and it seems that none of the dmc macros (dmc_*) work; but if you run the contents of a macro instead of calling the macro, it works as expected. Does anyone know why this is happening and the best way to go about fixing it? (Screenshot: after DB move.)
All, I am working on an app with some custom commands, and it requires me to restart quite a bit. Is there a way to speed up the restarts? Right now it's about 45 seconds (8 cores, 64 GB, M.2) with the latest Splunk container. Currently I shell into the Enterprise container and just sudo-restart the splunk binary. Some ideas:
- Maybe remove some apps? If so, which of the apps packaged with Splunk are safe to remove without causing issues?
- Can I get rid of the Splunk Web login and log in directly as admin?
Any ideas? Thanks! -Daniel
Hi,
My sources:
1. /app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC.log
2. /app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-show.log
3. /app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-ignored-sms.log
4. /app/splunkser/ShiftMinJMC/ShiftMinJMC.log
5. /app/splunkser/ShiftMinJMC/ShiftMinJMC-show.log
6. /app/splunkser/ShiftMinJMC/ShiftMinJMC-ignored-sms.log
7. /app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC.log
8. /app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-show.log
9. /app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-ignored-sms.log
I receive data from all of the above sources in the SIT environment, but in Production I am not receiving logs from these sources:
1. /app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC.log
4. /app/splunkser/ShiftMinJMC/ShiftMinJMC.log
7. /app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC.log
Note: I am getting logs in SIT from all 9 sources, but in Production the sources numbered 1, 4, and 7 are not showing up.
inputs.conf:
[monitor:///app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
[monitor:///app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-show-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
[monitor:///app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC-ignored-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
[monitor:///app/splunkser/ShiftMinJMC/ShiftMinJMC-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
[monitor:///app/splunkser/ShiftMinJMC/ShiftMinJMC-show-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
[monitor:///app/splunkser/ShiftMinJMC/ShiftMinJMC-ignored-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
[monitor:///app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
[monitor:///app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-show-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
[monitor:///app/splunkser/ShiftBDRecordJMC/ShiftBDRecordJMC-ignored-*.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
blacklist = \.(?:tar|gz)$
crcSalt = <SOURCE>
props.conf:
[app:jmcshift:logs]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}.\d{3}
SHOULD_LINEMERGE = false
TRUNCATE = 99999
Sample logs: events from all 9 sources start with a date, for example:
2023-01-12 23:24:50.245 [error]...........................................
The same inputs.conf and props.conf are used in the SIT and Production environments. Not sure what the issue could be.
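One thing worth checking, offered as a hedged observation rather than a diagnosis: the three missing sources (ShiftNonMinJMC.log, ShiftMinJMC.log, ShiftBDRecordJMC.log) are exactly the files without a hyphenated suffix, and every monitor stanza uses a pattern of the form ShiftNonMinJMC-*.log, which does not match the bare ShiftNonMinJMC.log filename. If SIT has an extra stanza (or a broader wildcard) covering those base files, that would explain why they appear in SIT but not Production. A sketch of a stanza that would cover one of the base files, assuming the same index and sourcetype apply:

```
[monitor:///app/splunkser/ShiftNonMinJMC/ShiftNonMinJMC.log]
disabled = 0
index = app-jmc-shift-sms
sourcetype = app:jmcshift:logs
```

The same would apply to ShiftMinJMC.log and ShiftBDRecordJMC.log; alternatively, a single wildcard such as ShiftNonMinJMC*.log matches both the bare and the suffixed forms.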
| loadjob savedsearch="nobody:splunk_fcr_evo:monitoring"
| table adt, FLOW, Date, NbRecordsOKFCR, Total, NbRecords, NBFile, NA1, NA2, NA3, CM, Alert
| where match(FLOW, "$Flow_token$") AND match(adt, "$adt_token$") $filter_green_lights$
| fields adt FLOW Date NA1, NA2, NA3, CM, "Total"
| sort adt, Date
Monitoring & alerting for noise in an audio file?
Hi, I currently have a spy audio recorder in my daughter's kindergarten, since there's been an increase of violence on the news lately. The recorder generates a .WAV file that can run for more than 24 hours a day. My question: does Splunk have the ability to ingest such a file and identify "events" based on a condition that could then trigger alerts? When you view an audio file, there's that "line" that moves up and down depending on the sound volume, right? I know that detecting a specific word might not be possible, but maybe when someone screams and that volume line hits its very top, an alert could fire? Or a continuous "high" line caused by sustained volume could hint at crying. This is very important to me; I would appreciate any ideas. Thank you!
Hello guys, I want a simple thing: install an ODBC driver on my Windows PC and pull data from Splunk into Power BI. However, I don't know how to authenticate. Our company uses Azure AD SSO, so I don't have any admin/password credentials. Does anyone have experience with this?
I am going against: company.splunkcloud.com
Auth: Azure AD
My network has 34 hosts with a universal forwarder set up on each of them, but only 5 of them are forwarding their logs to the indexer. What should I do to overcome this problem?
PS. The firewall on the indexer has already been turned off, to rule out any packet drops.
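A hedged troubleshooting sketch: the indexer's _internal index records every forwarder connection, so listing the hosts that have actually connected and comparing against the expected 34 can narrow down where the failure is (the hostnames shown are whatever the forwarders report):

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(_time) as last_seen by sourceHost
| eval last_seen = strftime(last_seen, "%Y-%m-%d %H:%M:%S")
```

Hosts missing from this list never reached the indexer at all, which points at outputs.conf or the network path on those UFs; hosts present here but absent from normal searches point instead at inputs.conf or index-time problems.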
Apologies if this belongs in a different location, but it seemed like the best fit among the choices available. I am looking to host a .txt file within my custom app that can be retrieved as an EDL (external dynamic list) by my firewall. I have been successful at serving an .html document from the app's ./appserver/static/ directory. This works fine, but I am concerned it will not work as an EDL; I really need it to be a .txt file (unless someone has had success with .html files and would like to share). To be clear, this works just fine:
https://splunk.example.com:8000/en-US/static/app/example_app/edl.html
But this does not:
https://splunk.example.com:8000/en-US/static/app/example_app/edl.txt
If anybody knows how to make this work, or could explain why it is not possible, I would appreciate it. Thank you,
Hello Splunk experts,
I would like to simplify some complex SPL queries that search for certain events and apply tags to them according to various business rules, based on both keyword searching and pattern matching. The events come from a ticketing system with many attributes, but I will simplify them thus:
ID      Ticket #
DATE    Ticket submit date/time
GROUP   Assigned group
NOTES   Details of ticket
SERVER  Affected server
I need to add a new field, called BUCKET, to calculate and store the type of ticket, based on a matrix like this:
BUCKET   GROUPS   KEYWORDS   SERVERS
AAA      g1, g2   f1, f2     abc*
BBB      g3, g4   f3         server123, xyz*
CCC      g5, g6   f9         *dmz*
For example, BUCKET should be set to AAA when a ticket event arrives whose group is g1 or g2, whose notes contain keyword f1 or f2, and whose server name begins with abc. I currently have some nasty queries with a bunch of where/if/case operations to do the tagging, and for each bucket the query has to scan through the same set of ticket events. I would really like to move this business logic into a simple lookup-type CSV file so it can easily be updated without modifying any savedsearches or dashboards, and so the tagging can be done in a single pass for all buckets by my current scheduled savedsearch, which processes the raw ticket data ingested from the ticketing system via DBX.
In reality, I have a dozen different buckets, ~50 different groups, and a similar number of keywords. Only one bucket actually needs pattern matching on the server name, but it would be nice to support full pattern matching. Our operators ultimately have a trellis-type scorecard dashboard with a box for each bucket that shows the current number of tickets and is colored when the number rises above certain levels.
When an operator clicks on a bucket number, they are sent to a drilldown that shows a table of the ticket details, and that drilldown has dynamic hyperlinks directly into the ticketing system. I am imagining this tagging could be done with a fancy lookup somehow. I already have the bucket matrix in a lookup CSV file. I have played with using the format command to generate the appropriate nested boolean AND/OR search logic with a foreach loop, but foreach doesn't seem to know how to iterate down a column of a CSV. Does my challenge seem doable? Can anyone share, or point me to, some example code that uses multiple patterns stored in a lookup CSV file? FYI, I am using Splunk Enterprise 8.1.9 and I DO NOT have any CLI access to either the SHC or the indexers. Thanks for any tips.
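One hedged pattern that avoids per-bucket SPL: expand the matrix to one row per (group, keyword, server-pattern) combination and define it as a wildcard lookup, so a single lookup call tags every event in one pass. Assuming a CSV named bucket_rules.csv with columns group, keyword_pattern, server_pattern, bucket (all names illustrative), a lookup definition along these lines:

```
[bucket_rules]
filename = bucket_rules.csv
match_type = WILDCARD(keyword_pattern), WILDCARD(server_pattern)
max_matches = 1
```

and then, in the scheduled search:

```
... | lookup bucket_rules group AS GROUP, keyword_pattern AS NOTES, server_pattern AS SERVER OUTPUT bucket AS BUCKET
```

Here the keyword_pattern values would be stored in the CSV as *f1*-style wildcards so they match anywhere within NOTES, and server_pattern rows carry patterns like abc* directly. The lookup definition (including match_type) can be created through the GUI under lookup definitions' advanced options, so no CLI access would be needed.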
I am trying to create an alert for when the field toState changes to OPEN and stays in that OPEN state for 5 minutes. I have tried the following, but it is not working; I would appreciate some pointers.
... CB_STATE_TRANSITION
| timechart span=5m count(toState="OPEN") as state
| stats count
| where count > 1
The alert runs every 5 minutes and triggers when the number of results is > 0.
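A hedged sketch of one alternative: look at the most recent transition and alert only when the latest state is OPEN and has been so for at least 5 minutes. Any field names beyond toState are illustrative, and a by clause would be needed if several breakers share the data:

```
... CB_STATE_TRANSITION
| stats latest(toState) as current_state latest(_time) as last_change
| where current_state="OPEN" AND (now() - last_change) >= 300
```

Run every 5 minutes over a window long enough to always contain the last transition. Two possible issues with the original, as guesses: count(toState="OPEN") would need to be count(eval(toState="OPEN")) to actually filter, and counting matching events per 5-minute span tests how often OPEN transitions occurred, not whether the state persisted.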
Hi, my strptime function is not working for the below format.
date format: 1/13/23 11:44:11.543 AM
eval time_epoc = strptime(_time, "%m/%d/%Y  %I:%M:%S.%3N %p")
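Two likely issues here, offered as hedged guesses: _time is already an epoch number, so strptime should normally be applied to the raw string field carrying the date, and a two-digit year like 23 needs %y rather than %Y. A sketch, assuming the date string lives in a field called date_str:

```
| eval time_epoc = strptime(date_str, "%m/%d/%y %I:%M:%S.%3N %p")
```

Splunk's strptime is generally tolerant of the non-zero-padded month in 1/13/23 with %m; the double space in the original format string is also worth removing if the raw data has only one.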