I want to enable/disable Splunk alerts in Splunk Cloud. How can I disable/enable alerts using the REST API or SPL commands in Splunk Cloud?
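For what it's worth, alerts are saved searches under the hood, so this is usually done against the saved/searches REST endpoint. A sketch of the relevant calls, assuming you have REST API access to your Splunk Cloud stack and the right namespace (replace <user>, <app>, and <alert_name> with your own values):

```
# Disable an alert
POST /servicesNS/<user>/<app>/saved/searches/<alert_name>/disable

# Re-enable it
POST /servicesNS/<user>/<app>/saved/searches/<alert_name>/enable

# Equivalent: set the disabled attribute directly
POST /servicesNS/<user>/<app>/saved/searches/<alert_name>
     disabled=1
```

Note that REST API access on Splunk Cloud may need to be requested/enabled first, and the endpoints require a role with permission to edit the saved search.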
I'm struggling to effectively use a small amount of JavaScript intended to facilitate some in-dashboard help pages, and hoping that someone might be able to help me out.

Given this JavaScript (in <app>/appserver/static/js/help.js):

require([
    'jquery',
    'splunkjs/mvc/simplexml/ready!'
], function($) {
    $('#help-btn').on('click', function() {
        $('#help').toggle();
    });
    $('#help-close').on('click', function() {
        $('#help').toggle();
    });
});

And this dashboard, which adds a "Dashboard Help" button (next to Edit): the idea is that simply clicking the button will toggle() the #help id, which is the panel. In "Edit" mode, I can see my overview.html panel as well as the "Dashboard Help" button, and clicking it performs the correct action. I've used this on live dashboards as well, successfully on some and not so much on others. I imagine there's some sort of timing condition with dynamic loading that I'm running into. Can anyone advise?

<form version="1.1" script="common_ui_util:/js/help.js">
  <label>TEST</label>
  <row>
    <panel>
      <html>
        <div>
          <input id="help-btn" type="button" value="Dashboard Help" style="top: -40px;" class="btn default"></input>
        </div>
      </html>
    </panel>
  </row>
  <row depends="$HIDEME$">
    <panel id="help">
      <html>
        <button id="help-close" class="help-close close close_btn"/>
      </html>
      <html src="html_docs/overview.html"></html>
    </panel>
  </row>
</form>
In a customer environment, we disabled all the rules from the ABAP agent, as you can see in the image below. We then stopped the ABAP agent by going into Status > Stop All, and waited 15 minutes before starting it again. After the restart, with all rules disabled, we're still receiving some BTs in the controller. I'd like some help with that, please. Thank you. Luiz Polli
Considering our current setup, i.e. authentication and authorization integrated with SAML: 1. how do we mark a user inactive, and 2. what do we do with his/her knowledge objects?
In a world where diversity is celebrated and inclusion is the cornerstone of progress, it is imperative that organizations champion the cause of inclusivity for individuals with disabilities. As we observe National Disability Employment Awareness Month (NDEAM) this October, Splunk stands firm in its commitment to promoting equitable opportunities and fostering an environment that embraces the unique capabilities of all its employees and its customers.

The History of NDEAM

Sometimes it’s helpful to know where we started in order to see how far we’ve come. Initially established in 1945 to advocate for the employment needs of individuals with disabilities, NDEAM has evolved into a month-long celebration that emphasizes the value of diversity and equal opportunities. This year’s NDEAM theme is "Advancing Access and Equity," which resonates deeply within Splunk's culture of inclusivity and is being fully embraced by the Splunk Disabilities Employee Resource Group (Disabled=True) with inclusive activities, awareness, microlearning, and community involvement.

NDEAM – and Beyond – at Splunk

With the core belief that every individual deserves an equal chance to succeed, Splunk not only honors NDEAM with learning and events, but works year-round to advocate for and educate employees about the value of honoring the unique skills and perspectives that individuals with disabilities bring to the workplace. A crucial component of this commitment is the emphasis on raising employee awareness about the scope of disabilities – helping everyone recognize that disability is a natural part of the human experience. If we don’t work towards advancing equity and access in the workplace, it’s almost impossible for our community to work safely and authentically together.

Inclusivity and Accessibility in Splunk Education & Certification

A more educated and inclusive workforce also translates into a more inclusive learning environment.
From accessible learning materials presented in multiple formats to providing reasonable accommodations for certification exams, we are dedicated to ensuring that every employee, learner, and customer, regardless of their abilities, has the opportunity to learn and grow with Splunk. Employee awareness about the scope of disabilities and the learning experience of people with disabilities have led to the following offerings:

Accessible Learning Materials: Splunk strives to make its educational materials, including documentation, videos, and training courses, accessible to learners with disabilities. This includes providing content in multiple formats, such as text, audio, and video, with proper captions and transcriptions in multiple languages.

Reasonable Accommodations: Splunk partners with Pearson Vue to offer reasonable accommodations to disabled learners looking to gain certifications. These accommodations might include extended time for exams, special software or hardware, or personalized support. Please review our policy to see if you're eligible for any special accommodations when taking an exam. (See page 9 of the Splunk Certification Candidate Handbook for more information.)

Web Accessibility: Our online learning platform, STEP/Cornerstone SBX, follows the highest accessibility standards, Web Content Accessibility Guidelines (WCAG) 2.1 AA, so that users with disabilities can navigate and interact with them effectively.

Training for Instructors and Staff: We provide training to instructors and staff members to ensure they are knowledgeable about accessibility best practices and can assist disabled learners effectively – fostering the Splunk culture of diversity and inclusion that encourages participation and learning for all.

Feedback: Splunk encourages learners to report accessibility issues or provide suggestions for improvement through feedback/surveys, to support continuous improvement and more accessible learning.
Please join us as we celebrate NDEAM this October and continue to champion inclusivity and diversity within the community. Splunk remains dedicated to fostering a community and work environment where every individual has the opportunity to thrive, learn, and share their unique point of view.  – Alexandria Mitchell and Callie Skokos on behalf of the Splunk Education Crew
Hi community,

| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval("FieldA"="True")) as FieldA_True,
        count(eval('FieldB'="True")) as FieldB_True,
        count(eval('FieldC'="True")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True

I get 0 results even though there is data. Could anyone please suggest a correct query? BR
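One likely culprit here: inside eval, double quotes denote string literals, so "FieldA"="True" compares two constant strings rather than referencing the field. Single quotes reference field values. A sketch with consistent single-quoting on all three fields (assuming the field values really are the string "True"):

```
| eval ycw = strftime(_time, "%Y_%U")
| stats count(eval('FieldA'="True")) as FieldA_True,
        count(eval('FieldB'="True")) as FieldB_True,
        count(eval('FieldC'="True")) as FieldC_True by ycw
| table ycw, FieldA_True, FieldB_True, FieldC_True
```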
We are in the process of deploying our endpoint logging strategy. Right now, we are using CrowdStrike as our EDR. As far as I can tell, if we wanted to use the logs collected by the CrowdStrike agent and forward them into Splunk, we would have to pay for the FDR license, which at the moment we cannot due to budget constraints. When I look at the correlation searches that utilize the Endpoint data model, most of those detections are based on data that originates from Endpoint Detection and Response (EDR) agents. Since in our case we cannot utilize that data coming from CrowdStrike, could we use Sysmon instead to collect the data that we need to implement those correlation searches? This is one of the use cases that I was interested in implementing: https://research.splunk.com/endpoint/1a93b7ea-7af7-11eb-adb5-acde48001122/
Does anyone have SPL to monitor for capacity % on an index cluster? I'd like to watch each indexer/data volume and receive an alert if they breach a 90% threshold.
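A sketch of one common approach, using the same REST endpoint the Monitoring Console uses for disk usage (field names assume the partitions-space endpoint; run from a search head that can reach the indexers via splunk_server):

```
| rest splunk_server=* /services/server/status/partitions-space
| eval pct_used = round((capacity - free) / capacity * 100, 2)
| where pct_used > 90
| table splunk_server, mount_point, capacity, free, pct_used
```

Saved as an alert that triggers when results are returned, this would fire whenever any indexer's data volume crosses the 90% threshold.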
I am new to Splunk and I have the following message which I would like to parse into a table of columns:

{dt.trace_id=837045e132ad49311fde0e1ac6a6c18b, dt.span_id=169aa205dab448fc, dt.trace_sampled=true}
{
  "correlationId": "3-f0d89f31-6c3c-11ee-8502-123c53e78683",
  "message": "API Request",
  "tracePoint": "START",
  "priority": "INFO",
  "category": "com.cfl.api.service",
  "elapsed": 0,
  "timestamp": "2023-10-16T15:59:09.051Z",
  "content": {
    "clientId": "",
    "attributes": {
      "headers": {
        "accept-encoding": "gzip,deflate",
        "content-type": "application/json",
        "content-length": "92",
        "host": "hr-fin.svr.com",
        "connection": "Keep-Alive",
        "user-agent": "Apache-HttpClient/4.5.5 (Java/16.0.2)"
      },
      "clientCertificate": null,
      "method": "POST",
      "scheme": "https",
      "queryParams": {},
      "requestUri": "/cfl-service-api/api/process",
      "queryString": "",
      "version": "HTTP/1.1",
      "maskedRequestPath": "/api/queue/send",
      "listenerPath": "/cfl-service-api/api/*",
      "localAddress": "/localhost:8082",
      "relativePath": "/cfl-service-api/api/process",
      "uriParams": {},
      "rawRequestUri": "/cfl-service-api/api/process",
      "rawRequestPath": "/cfl-service-api/api/process",
      "remoteAddress": "/123.123.123.123:123",
      "requestPath": "/cfl-service-api/api/process"
    }
  },
  "applicationName": "cfl-service-api",
  "applicationVersion": "6132",
  "environment": "dev",
  "threadName": "[cfl-service-api].proxy.BLOCKING @78f55ba"
}

Thank you so much for your help.
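Since the event is a JSON payload preceded by a {dt.trace_id=...} prefix, one way to tackle it is to carve out the JSON portion with rex and hand it to spath. A sketch (the index/sourcetype placeholders and the field list are illustrative):

```
index=<your_index> sourcetype=<your_sourcetype>
| rex field=_raw "(?<json_part>\{\s*\"correlationId\".*)"
| spath input=json_part
| table correlationId, message, tracePoint, priority, timestamp,
        content.attributes.method, content.attributes.requestPath
```

If the sourcetype is already configured for automatic KV/JSON extraction, the spath step may be unnecessary and the fields can be tabled directly.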
A user is unable to access Investigations in Enterprise Security (ES 7.1.1) on Splunk Cloud (Splunk 9.0.2). When clicking on Investigations from the main menu, the message "You do not have permissions to access investigations" appears. The user is assigned the ESS Analyst role, which includes the Manage All Investigations capability. Any ideas? Thanks in advance.
Hello, for the column chart results and visualization below, I want to show different colors for the field values: AU Pre & AU Post as one color, DE Pre & DE Post as one color. I'm using this, but it doesn't work:

<option name="charting.fieldColors">{"AU Pre":0x333333,"AU Post":0xd93f3c,"JP Pre":0xeeeeee,"JP Post":0x65a637,"DE Pre":0xeeeeee,"DE Post":0x65a637}</option>

market     limit      spend
AU Pre     1462912    884854
AU Post    2160567    1166031
DE Pre     91217      76973
DE Post    160221     97906
JP Pre     1621712    1015115
JP Post    2787394    1282541
Hi, I was wondering: can we blacklist events where the process name is "-" in the inputs.conf deployed from the DS, to save Splunk license?

Sample event:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-a5ba-3e3b0328c30d}'/><EventID>4624</EventID><Version>3</Version><Level>0</Level><Task>12544</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2023-10-17T16:07:15.4402877Z'/><EventRecordID>455140</EventRecordID><Correlation ActivityID='{b2071651-382e-4101-85e8-28f5e9b1b5d5}'/><Execution ProcessID='1112' ThreadID='3816'/><Channel>Security</Channel><Computer>xyz.com</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>NULL SID</Data><Data Name='SubjectUserName'>-</Data><Data Name='SubjectDomainName'>-</Data><Data Name='SubjectLogonId'>0x0</Data><Data Name='TargetUserSid'>NT AUTHORITY\SYSTEM</Data><Data Name='TargetUserName'>xxx$</Data><Data Name='TargetDomainName'>xyx.COM</Data><Data Name='TargetLogonId'>0xb126027</Data><Data Name='LogonType'>3</Data><Data Name='LogonProcessName'>Kerberos</Data><Data Name='AuthenticationPackageName'>Kerberos</Data><Data Name='WorkstationName'>-</Data><Data Name='LogonGuid'>{c425351a-8525-d2f0-f686-1a0aff9db449}</Data><Data Name='TransmittedServices'>-</Data><Data Name='LmPackageName'>-</Data><Data Name='KeyLength'>0</Data><Data Name='ProcessId'>0x0</Data><Data Name='ProcessName'>-</Data><Data Name='IpAddress'>127.0.0.1</Data><Data Name='IpPort'>0</Data><Data Name='ImpersonationLevel'>%%1833</Data><Data Name='RestrictedAdminMode'>-</Data><Data Name='RemoteCredentialGuard'>-</Data><Data Name='TargetOutboundUserName'>-</Data><Data Name='TargetOutboundDomainName'>-</Data><Data Name='VirtualAccount'>%%1843</Data><Data Name='TargetLinkedLogonId'>0x0</Data><Data Name='ElevatedToken'>%%1842</Data></EventData></Event>

Thanks
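In principle yes: the WinEventLog input stanza supports blacklist entries of the form key="regex". A sketch of what the forwarder-side inputs.conf (deployed from the DS in an app) might look like, assuming classic (non-XML) event rendering, where the regex is an assumption about how the Process Name line renders in the Message field; with renderXml=true the regex would instead have to match the raw XML, e.g. the <Data Name='ProcessName'>-</Data> element:

```
[WinEventLog://Security]
blacklist1 = EventCode="4624" Message="Process\s+Name:\s*-"
```

Note the filtering happens on the universal forwarder, so the events are dropped before they count against the license.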
Hello,

I am trying to set up a ShowPanel token that I will use later on in the dashboard in this way:

<panel depends="$ShowPanel$">

Here is how I am setting it up:

<input type="dropdown" token="tok_usecase" searchWhenChanged="true">
  <label>Use Case</label>
  <choice value="&quot;01&quot;">Use Case 01</choice>
  <choice value="&quot;02&quot;">Use Case 02</choice>
  <choice value="&quot;03&quot;">Use Case 03</choice>
  <choice value="&quot;04&quot;">Use Case 04</choice>
  <choice value="&quot;05&quot;">Use Case 05</choice>
  <choice value="&quot;06&quot;">Use Case 06</choice>
  <default>"01"</default>
  <initialValue>"01"</initialValue>
  <change>
    <condition match="$tok_usecase$==&quot;05&quot;">
      <set token="ShowPanel">true</set>
      <unset token="tok_audit"></unset>
      <unset token="tok_taskID"></unset>
      <unset token="tok_sessID"></unset>
    </condition>
    <condition match="$tok_usecase$!=&quot;05&quot;">
      <unset token="ShowPanel"></unset>
      <unset token="tok_audit"></unset>
      <unset token="tok_taskID"></unset>
      <unset token="tok_sessID"></unset>
    </condition>
  </change>
</input>

but it seems the $ShowPanel$ token is never set. I tried to print it in the panel title, but it appears as $ShowPanel$ instead of having the value "true". Do you know where the error is? Thanks a lot, Edoardo
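One thing that may be tripping this up: since each choice's value literally contains the quote characters, comparing against the exact choice value with <condition value="..."> is often more reliable than a match expression, and a bare <condition> serves as the catch-all fallback. A sketch of the change block along those lines (shown only for the ShowPanel token; the other unsets would be repeated the same way):

```
<change>
  <condition value="&quot;05&quot;">
    <set token="ShowPanel">true</set>
  </condition>
  <condition>
    <unset token="ShowPanel"></unset>
  </condition>
</change>
```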
{ [-]
  logger: org.mule.runtime.core.internal.processor.LoggerMessageProcessor
  message: Received update request IL_Customer. Size of array: 1
  properties: { [-]
    correlationId: 4b910aaf-d316-4594-8eda-c56e861499d3

I want to extract IL_Customer and the array size from the above log. What would the regular expression be? Thanks in advance.
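Assuming the text always follows the form "Received update request <name>. Size of array: <n>", a rex along these lines may do it (the capture-group names are illustrative, and field=message assumes the JSON fields are already extracted; use field=_raw otherwise):

```
| rex field=message "Received update request (?<request_entity>\w+)\. Size of array:\s*(?<array_size>\d+)"
```

For the sample event this would yield request_entity=IL_Customer and array_size=1.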
Hello, all! I'm hoping for an ELI5 (explain like I'm 5) on the best way to migrate an indexer cluster database to an entirely new cluster environment. The end goal is to decommission the current setup.

My current setup: RHEL 7, physical, Splunk 8.2.4. All log sources are still flowing to this setup. 3 SH cluster, 3 IDX cluster, 1 CM, etc.

New: RHEL 8, AWS/VMs, Splunk 9.1.1. This setup is still empty, with no logs/sources flowing here yet. 3 SH cluster, 3 IDX cluster, 1 CM, etc.

From what I found online, merging the 3 new indexers into the old cluster seems to be the preferred method. Does anyone have a link to a detailed writeup on how to do so, with all the little nuances that come with it? Are differing Splunk versions okay? Do I change the replication factor? I'm sure there are a bunch of steps to this method. I appreciate any help!
It's my understanding that the default frozenTimePeriodInSecs is 6 years. I am confused about what this graph means for my _internal index. This is Data Age vs Frozen Age. 1. I don't have 1.461 days of data, because my deployment isn't that old. 2. This appears to show that my frozen age is 30 days? I'm not really sure what this number means.
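Rather than inferring the retention from the graph, the data/indexes REST endpoint exposes the configured value per index. A sketch for checking it directly:

```
| rest splunk_server=local /services/data/indexes
| eval frozen_days = round(frozenTimePeriodInSecs / 86400, 0)
| table title, frozenTimePeriodInSecs, frozen_days
```

Note that _internal ships with its own retention setting that is much shorter than the global default, which may explain the ~30-day frozen age shown.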
So I have the following search, and I want to create a dashboard with separate columns for "Hits" and "Misses". It seems this should be pretty straightforward, but I am lost among joins, stats, evals, etc.:

cf_org_name="ABB" cf_space_name="qa" cf_app_name=*qa-my-app* index=ocf_* "CACHE Hit" OR "CACHE Miss"
| timechart span=1d count by type

How can I convert this to a chart with 2 columns which show Hits and Misses per day? Thanks
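Assuming "CACHE Hit" / "CACHE Miss" appear in the raw event text, one sketch is to derive a Hit/Miss field with searchmatch and split the timechart on that instead of type:

```
cf_org_name="ABB" cf_space_name="qa" cf_app_name=*qa-my-app* index=ocf_* "CACHE Hit" OR "CACHE Miss"
| eval cache_result = if(searchmatch("CACHE Hit"), "Hit", "Miss")
| timechart span=1d count by cache_result
```

Rendered as a column chart, this produces two series per day, one for Hits and one for Misses.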
Hi All, I'd appreciate some suggestions for a problem I'm facing. I have a search which outputs a few results, and what I want to do is take each result's _time, modify the earliest and latest times to be within +/- 1 minute of the event, and pass a value from a certain field to a second search.

I have looked at other answers and I can see suggestions for using subsearches, and also the map command. The problem, though, is that the events from the original search are not kept this way. With the map command you can keep specific fields from the first search using further evals, but this gets tedious when you want to keep as many fields as possible.

Example: first search (checks for 'file create' events in Sysmon):

index=sysmon EventId=11 file_name=test_file* file_name="test_file.txt"

Let's say this produces 3 results, with 3 different times and 3 different users:

time 1   test_file.txt   user 1
time 2   test_file.txt   user 2
time 3   test_file.txt   user 3

Bear in mind there would be other fields too in the actual events. Then what I would like to do is take time 1, for example, extend the time range by 1 minute either side, and use a second search to pass in the file name and user name to see where this file was downloaded from.

Second search:

index=web file_name=test_file.txt earliest=(time1 - 1min) latest=(time1 + 1min) user=user1

This should give me an additional event with the corresponding file download (with url etc.), whilst keeping the 3 events from the 1st search. So when you look at all events, you would have both the file download event from the web index and the file create event from Sysmon, while keeping all the fields and values from both events.

Appreciate any ideas. Thanks!
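For the map route, the +/- 1 minute window can be computed per result and passed in as tokens, since map substitutes $field$ from each input row and earliest/latest accept epoch values. A sketch (with the caveat from the post that map only returns the subsearch's results, so outer fields still need to be carried across explicitly):

```
index=sysmon EventId=11 file_name="test_file.txt"
| eval e = _time - 60, l = _time + 60
| map maxsearches=50 search="search index=web file_name=\"$file_name$\" user=\"$user$\" earliest=$e$ latest=$l$"
```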
Hi All, we have some process-related services, like application services, running in Windows. How can I get their status? Examples below:

cybAgent.bin
event_demon -A
as_server -A
Hi All, I need help building an SPL query that returns all available fields mapped to their sourcetype/source, looking across all indexers and crawling through all indexes (index=*). I currently use the following to strip out all the fields and their extracted values, but I have no idea where they are coming from, i.e. what their sourcetype and source is:

index=* | fieldsummary
| search values!="[]"
| rex field=values max_match=0 "\{\"value\":\"(?<extracted_values>[^\"]+)\""
| fields field extracted_values

Thank you!