All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I am trying to hit a URL from Splunk using the curl command. The endpoint needs a header to be passed with the key "X-User-ID". I am passing the value as below, but I still get an error message stating that the value is not passed in the header. Can you help me understand if my query is incorrect?

| eval header="{\"X-User-ID\": \"myuserid\"}"
| curl method=get headerfield=header uri="https://domainname/getstatusData?startTime=2021-07-07%2008:50:00&endTime=2021-07-07%2012:50:00&authenticationBearer=na" debug=true

curl message: "message":"Missing request header \u0027X-User-ID\u0027 for method parameter of type String"

I am able to invoke this from Postman without any issues.
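For comparison outside Splunk, the same request can be built with Python's standard library to confirm what a correctly set X-User-ID header looks like (the host, path, and user ID are taken from the question; nothing is actually sent here):

```python
import urllib.request

# Build (but do not send) the GET request with the required header
req = urllib.request.Request(
    "https://domainname/getstatusData"
    "?startTime=2021-07-07%2008:50:00&endTime=2021-07-07%2012:50:00"
    "&authenticationBearer=na",
    headers={"X-User-ID": "myuserid"},
    method="GET",
)

# urllib stores header names capitalized, so the lookup key is "X-user-id"
assert req.get_header("X-user-id") == "myuserid"
```

If the header is fine in isolation like this, the issue is more likely in how the Splunk curl command consumes the headerfield argument (whether it expects a field name holding a JSON object, or a different shape entirely), which is worth checking against that app's documentation.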
Hi, in the event example below:

7/8/2021 12:21:17 PM 111C PACKET 0000020BF8B054E0 UDP Snd xx.xx.xx.xx d6b8 R Q [8085 A DR NOERROR] AAAA (13)PC-3QZT282-D(7)system(2)us(0)
7/8/2021 12:21:17 PM 111C PACKET 0000020BF8B054E0 UDP Snd xx.xx.xx.xx d6b8 R Q [8085 A DR NOERROR] AAAA (13)PC-3QBVQ11-D(7)maintenance(2)us(0)

I am trying to write a regex that would exclude the first entry from being indexed. Below is the config I have.

props.conf:

[Test:system]
TRANSFORMS-null = setnull

transforms.conf:

[setnull]
REGEX = .+system.+
DEST_KEY = queue
FORMAT = nullQueue

After restarting Splunk on the heavy forwarder, it still indexes the first event (the one with "system" in it). Please assist. Thanks
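The REGEX itself does match the first event, which can be sanity-checked outside Splunk; if the pattern is fine, the usual suspects are the sourcetype named in the props.conf stanza not matching the data, or the filtering config not living on the instance that actually parses the events. A quick check of the pattern (event text copied from the question):

```python
import re

system_event = ("7/8/2021 12:21:17 PM 111C PACKET 0000020BF8B054E0 UDP Snd "
                "xx.xx.xx.xx d6b8 R Q [8085 A DR NOERROR] AAAA "
                "(13)PC-3QZT282-D(7)system(2)us(0)")
maintenance_event = ("7/8/2021 12:21:17 PM 111C PACKET 0000020BF8B054E0 UDP Snd "
                     "xx.xx.xx.xx d6b8 R Q [8085 A DR NOERROR] AAAA "
                     "(13)PC-3QBVQ11-D(7)maintenance(2)us(0)")

null_queue_regex = re.compile(r".+system.+")

assert null_queue_regex.search(system_event) is not None       # would be dropped
assert null_queue_regex.search(maintenance_event) is None      # would be kept
```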
Hi, new to Splunk. I am trying to create a test automation dashboard. I have the following JSON in my Splunk events, which comes from Jenkins jobs. I would like to parse the following fields from this JSON. I tried with spath but it did not work. Can someone help?

Classname
Testname
Status
stderr (only the first line)
Nexis.Auto.Environment

{ [-] build_number: 232 build_url: job/Dev/job/gob/TestAutomation/job/Regression/232/ event_tag: build_report job_name: Dev/TestAutomation/Regression job_result: UNSTABLE metadata: { Nexis.Auto.Browser: chrome headless Nexis.Auto.Environment: CERT1 Nexis.Auto.IsRemote: false Nexis.Auto.Platform: Nexis.Auto.Version: RELEASE_KEY: r1080 SPECFLOW_BUILD_NUMBER: TEST_FILTER: TestCategory=regression } page_num: 13 testsuite: { duration: 1800.8512 errors: 0 failures: 9 passes: 16 skips: 0 testcase: [ { classname: HomePage.#()::TestAssembly:TestAutomation duration: 80.61503 failedsince: 0 groupname: MSTestSuite skipped: false status: PASSED stdout: Given the user is on the L landing page -> done: GivenTheUserIsOnTheLLandingPage() (1.8s) And selects the source "CNN Wire" from "Home" page -> done: GivenSelectsTheSourceFromPage("CNN Wire", "Home") (8.3s) When searches for "crime" -> done: GivenSearchesFor("crime") (3.1s) Then the results page is displayed -> done: ThenTheResultsPageIsDisplayed() (0.1s) And the Narrow by filter section is displayed with "CNN Wire" -> done: ThenTheNarrowByFilterSectionIsDisplayedWith("CNN Wire") (1.0s) testname: Search results with Source uniquename: gHomePage.#()::TestAssembly:TestAutomationSearch results with Source and Segment Search, Wire } { classname: TestAutomationSpecs/DisableTransactionalAccessUsers.#()::TestAssembly:g.TestAutomation duration: 51.68238 errordetails: Timed out after 30 seconds -> no such element: Unable to locate element: {"method":"css selector","selector":"li[class='navitemselected']"} (Session info: headless chrome=91.0.4472.124) (Driver info: chromedriver=2.39.562718
(9a2698cba08cf5a471a29d30c8b3e12becabb0e9),platform=Windows NT 6.3.9600 x86_64) errorstacktrace: OpenQA.Selenium.WebDriverTimeoutException: Timed out after 30 seconds ---> OpenQA.Selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":"li[class='navitemselected']"} (Session info: headless chrome=91.0.4472.124) (Driver info: chromedriver=2.39.562718 (9a2698cba08cf5a471a29d30c8b3e12becabb0e9),platform=Windows NT 6.3.9600 x86_64) at OpenQA.Selenium.Remote.RemoteWebDriver.UnpackAndThrowOnError(Response errorResponse) at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters) at OpenQA.Selenium.Remote.RemoteWebDriver.FindElement(String mechanism, String value) at OpenQA.Selenium.Remote.RemoteWebDriver.FindElementByCssSelector(String cssSelector) at OpenQA.Selenium.By.<>c__DisplayClass23_0.<CssSelector>b__0(ISearchContext context) at OpenQA.Selenium.By.FindElement(ISearchContext context) at OpenQA.Selenium.Remote.RemoteWebDriver.FindElement(By by) at OpenQA.Selenium.Support.UI.ExpectedConditions.<>c__DisplayClass19_0.<ElementToBeClickable>b__0(IWebDriver driver) at OpenQA.Selenium.Support.UI.DefaultWait`1.Until[TResult](Func`2 condition) --- End of inner exception stack trace --- at OpenQA.Selenium.Support.UI.DefaultWait`1.ThrowTimeoutException(String exceptionMessage, Exception lastException) at OpenQA.Selenium.Support.UI.DefaultWait`1.Until[TResult](Func`2 condition) at LexisL.TestAutomation.Core.WebDriver.WaitUntil[T](Func`2 expectedCondition, Int32 timeoutInSeconds) in D:\BuildAgent\_work\16\s\Core\Originals\WebDriver.cs:line 338 at g.TestAutomation.PageObjects.PageObjects.L.LLandingPage.SelectSecondarySearch(String searchType) in c:\jenkins\workspace\Dev\g\TestAutomation\g_L_Specflow_Build\g.TestAutomation.PageObjects\PageObjects\L\LLandingPage.cs:line 707 at LexisL.TestAutomation.L.Spec.Specs.StepDefinition.NavigationSteps.GivenTheUserIsOnSecondLevelPage(String 
searchLevel) in c:\jenkins\workspace\Dev\g\TestAutomation\g_L_Specflow_Build\g.TestAutomation.L.Specs\Specs\L\StepDefinitions\LNavigationSteps.cs:line 23 at lambda_method(Closure , IContextManager , String ) at TechTalk.SpecFlow.Bindings.BindingInvoker.InvokeBinding(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, TimeSpan& duration) at TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.ExecuteStepMatch(BindingMatch match, Object[] arguments) at TechTalk.SpecRun.SpecFlowPlugin.Runtime.RunnerTestExecutionEngine.ExecuteStepMatch(BindingMatch match, Object[] arguments) at TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.ExecuteStep(IContextManager contextManager, StepInstance stepInstance) at TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.OnAfterLastStep() at TechTalk.SpecFlow.TestRunner.CollectScenarioErrors() at g.TestAutomation.L.Specs.Specs.L.Features.LLDisableTransactionalAccessUsersFeature.ScenarioCleanup() at g.TestAutomation.L.Specs.Specs.L.Features.LLDisableTransactionalAccessUsersFeature.DTAUser_PowerSearch_Filters_LegalContentResults(String hLCTList, String[] exampleTags) in c:\jenkins\workspace\Dev\g\TestAutomation\g_L_Specflow_Build\g.TestAutomation.L.Specs\Specs\L\Features\LDisableTransactionalAccess.feature:line 44 at g.TestAutomation.L.Specs.Specs.L.Features.LLDisableTransactionalAccessUsersFeature.DTAUser_PowerSearch_Filters_LegalContentResults_CasesStatutesAndLegislationAdministrativeAndAgencyMaterialsAdministrativeCodesAndRegulationsLawReviewsAndJournals() in c:\jenkins\workspace\Dev\g\TestAutomation\g_L_Specflow_Build\g.TestAutomation.L.Specs\Specs\L\Features\LDisableTransactionalAccess.feature:line 34 at TechTalk.SpecRun.Framework.TaskExecutors.StaticOrInstanceMethodExecutor.ExecuteInternal(ITestThreadExecutionContext testThreadExecutionContext) at TechTalk.SpecRun.Framework.TaskExecutors.StaticOrInstanceMethodExecutor.Execute(ITestThreadExecutionContext testThreadExecutionContext) at 
TechTalk.SpecRun.Framework.TestAssemblyExecutor.ExecuteTestNodeTask(TestNode testNode, ITaskExecutor task, TraceEventType eventType) failedsince: 204 groupname: MSTestSuite skipped: false status: FAILURE stderr: Timed out after 30 seconds -> no such element: Unable to locate element: {"method":"css selector","selector":"li[class='navitemselected']"} (Session info: headless chrome=91.0.4472.124) (Driver info: chromedriver=2.39.562718 (9a2698cba08cf5a471a29d30c8b3e12becabb0e9),platform=Windows NT 6.3.9600 x86_64) OpenQA.Selenium.WebDriverTimeoutException: Timed out after 30 seconds ---> OpenQA.Selenium.NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":"li[class='navitemselected']"} (Session info: headless chrome=91.0.4472.124) (Driver info: chromedriver=2.39.562718 (9a2698cba08cf5a471a29d30c8b3e12becabb0e9),platform=Windows NT 6.3.9600 x86_64) at OpenQA.Selenium.Remote.RemoteWebDriver.UnpackAndThrowOnError(Response errorResponse) at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters) at OpenQA.Selenium.Remote.RemoteWebDriver.FindElement(String mechanism, String value) at OpenQA.Selenium.Remote.RemoteWebDriver.FindElementByCssSelector(String cssSelector) at OpenQA.Selenium.By.<>c__DisplayClass23_0.<CssSelector>b__0(ISearchContext context) at OpenQA.Selenium.By.FindElement(ISearchContext context) at OpenQA.Selenium.Remote.RemoteWebDriver.FindElement(By by) at OpenQA.Selenium.Support.UI.ExpectedConditions.<>c__DisplayClass19_0.<ElementToBeClickable>b__0(IWebDriver driver) at OpenQA.Selenium.Support.UI.DefaultWait`1.Until[TResult](Func`2 condition) --- End of inner exception stack trace --- at OpenQA.Selenium.Support.UI.DefaultWait`1.ThrowTimeoutException(String exceptionMessage, Exception lastException) at OpenQA.Selenium.Support.UI.DefaultWait`1.Until[TResult](Func`2 condition) at LexisL.TestAutomation.Core.WebDriver.WaitUntil[T](Func`2 expectedCondition, Int32 
timeoutInSeconds) in D:\BuildAgent\_work\16\s\Core\Originals\WebDriver.cs:line 338 at g.TestAutomation.PageObjects.PageObjects.L.LLandingPage.SelectSecondarySearch(String searchType) in c:\jenkins\workspace\Dev\g\TestAutomation\g_L_Specflow_Build\g.TestAutomation.PageObjects\PageObjects\L\LLandingPage.cs:line 707 at LexisL.TestAutomation.L.Spec.Specs.StepDefinition.NavigationSteps.GivenTheUserIsOnSecondLevelPage(String searchLevel) in c:\jenkins\workspace\Dev\g\TestAutomation\g_L_Specflow_Build\g.TestAutomation.L.Specs\Specs\L\StepDefinitions\LNavigationSteps.cs:line 23 at lambda_method(Closure , IContextManager , String ) at TechTalk.SpecFlow.Bindings.BindingInvoker.InvokeBinding(IBinding binding, IContextManager contextManager, Object[] arguments, ITestTracer testTracer, TimeSpan& duration) at TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.ExecuteStepMatch(BindingMatch match, Object[] arguments) at TechTalk.SpecRun.SpecFlowPlugin.Runtime.RunnerTestExecutionEngine.ExecuteStepMatch(BindingMatch match, Object[] arguments) at TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.ExecuteStep(IContextManager contextManager, StepInstance stepInstance) at TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.OnAfterLastStep() at TechTalk.SpecFlow.TestRunner.CollectScenarioErrors() at g.TestAutomation.L.Specs.Specs.L.Features.LLDisableTransactionalAccessUsersFeature.ScenarioCleanup() at g.TestAutomation.L.Specs.Specs.L.Features.LLDisableTransactionalAccessUsersFeature.DTAUser_PowerSearch_Filters_LegalContentResults(String hLCTList, String[] exampleTags) in c:\jenkins\workspace\Dev\g\TestAutomation\g_L_Specflow_Build\g.TestAutomation.L.Specs\Specs\L\Features\LDisableTransactionalAccess.feature:line 44 at g.TestAutomation.L.Specs.Specs.L.Features.LLDisableTransactionalAccessUsersFeature.DTAUser_PowerSearch_Filters_LegalContentResults_CasesStatutesAndLegislationAdministrativeAndAgencyMaterialsAdministrativeCodesAndRegulationsLawReviewsAndJournals() in 
c:\jenkins\workspace\Dev\g\TestAutomation\g_L_Specflow_Build\g.TestAutomation.L.Specs\Specs\L\Features\LDisableTransactionalAccess.feature:line 34 at TechTalk.SpecRun.Framework.TaskExecutors.StaticOrInstanceMethodExecutor.ExecuteInternal(ITestThreadExecutionContext testThreadExecutionContext) at TechTalk.SpecRun.Framework.TaskExecutors.StaticOrInstanceMethodExecutor.Execute(ITestThreadExecutionContext testThreadExecutionContext) at TechTalk.SpecRun.Framework.TestAssemblyExecutor.ExecuteTestNodeTask(TestNode testNode, ITaskExecutor task, TraceEventType eventType) stdout: Given the user is a DTA user with "legal" content -> done: LDTAuserSteps.GivenTheUserIsADTAUserWithContent("legal") (0.0s) -> [Fatal] Timed out after 30 seconds -> [Fail] Current Web Driver URL: https://cdc1-signin.lexisL.com/lnaccess/AuthzDenied?aci=la -> [Fail] Screen shot saved to: ScreenShots/LLDisableTransactional/DTAUserPowerSearchFilters_20210708_023332.png -> [Fatal] Timed out after 30 seconds -> [Fail] Current Web Driver URL: https://cdc1-signin.lexisL.com/lnaccess/AuthzDenied?aci=la -> [Fail] Screen shot saved to: ScreenShots/LLDisableTransactional/TransactionalUserPowerSearchLegalContent_20210708_023333.png -> [Fatal] Timed out after 30 seconds -> [Fail] Current Web Driver URL: https://cert1-advance.lexis.com/search/?pdmfid=1519360&crid=32014007-3588-4f08-ac9f-b1dea68bde86&pdsearchterms=crime&pdstartin=urn%3Ahlct%3A16&pdcaseshlctselectedbyuser=false&pdtypeofsearch=searchboxclick&pdsearchtype=SearchBox&pdoriginatingpage=BisLPowerSearch&pdqttype=or&pdpsf=urn%3Ahlct%3A16&pdquerytemplateid=&indexsearch=false&ecomp=_x7hkkk&earg=pdpsf&prid=4739fc21-707f-4e2f-a94f-8858f7c2363b -> [Fail] Screen shot saved to: ScreenShots/LLNewsletter/CreatenewsletterAddtextblockCancelSave_20210708_023335.png testname: DTA User-Power Search-Filters-Legal Content Results, Cases;Statutes and Legislation;Administrative and Agency Materials;Administrative Codes and Regulations;Law Reviews and Journals uniquename: 
TestAutomation.Specs.L/LDsUsers.#()::TestAssembly:g.TestAutomation.L.DTA User-Power Search-Filter } } ] tests: 25 time: 1800.8512 total: 25 } user: (timer) }
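Assuming the raw event is valid JSON (the event-viewer rendering above drops the quotes), the wanted fields live under testsuite.testcase, one object per test; in SPL that is typically reached with spath and a path like testsuite.testcase{}. The extraction shape can be prototyped with a simplified stand-in event (field names follow the event above; values are abbreviated, made-up samples):

```python
import json

# Simplified stand-in for the Jenkins build_report event
raw = json.dumps({
    "metadata": {"Nexis.Auto.Environment": "CERT1"},
    "testsuite": {"testcase": [
        {"classname": "HomePage", "testname": "Search results with Source",
         "status": "PASSED"},
        {"classname": "DisableTransactionalAccessUsers",
         "testname": "DTA User-Power Search-Filters-Legal Content Results",
         "status": "FAILURE",
         "stderr": "Timed out after 30 seconds\nno such element"},
    ]},
})

event = json.loads(raw)
env = event["metadata"]["Nexis.Auto.Environment"]
rows = [
    {
        "Classname": tc["classname"],
        "Testname": tc["testname"],
        "Status": tc["status"],
        # only the first line of stderr, as requested
        "stderr": tc["stderr"].splitlines()[0] if tc.get("stderr") else "",
        "Nexis.Auto.Environment": env,
    }
    for tc in event["testsuite"]["testcase"]
]
```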
I added three new Windows indexers to my existing three-node indexer cluster. The new ones show up on the Peers page but not on the Instances page on the cluster master. I am new to Splunk, so what did I do wrong? On the cluster master, under Settings \ DISTRIBUTED ENVIRONMENT \ Indexer Clustering, I can see all six indexers; however, when I go to Settings \ Monitoring Console, I only see the old three.
I have to assume this has been asked over and over, but I can't seem to find it. If I use inputs.conf on my indexer to block specific event IDs, do those filtered events count against my Splunk license?
I have some dashboards showing information and some that do not. Currently I'm working on getting the GlobalProtect dashboard to show information. While I do see GlobalProtect listed in some of the log files when looking at pan:system, I do not see log_subtype="globalprotect". I do see the following log subtypes: vpn, general, auth, userid, url-filtering. I unfortunately have no idea how to tell the system to parse the data for GlobalProtect. Ian
File monitor configured, but nothing is indexing? Here is my inputs.conf:

[monitor://C:\xxxx\xxxxxx\xxxxxxx\xxxxx.docx]

[monitor://C:\xxxxx\xxxxxxx\xxxxxx.docx]
disabled = 0
index = file_integrity_monitoring
sourcetype = test
crcSalt = <SOURCE>

I am following the article below in our Splunk Cloud environment.

https://docs.splunk.com/Documentation/Splunk/8.2.1/Data/Monitorfilesanddirectorieswithinputs.conf

Any idea what is missing?
Hi, I have some processes that do not finish successfully, and I want to trace them with Splunk. Here is the scenario: I have WildFly, which creates a log file. When I start WildFly, the code WFLYSRV0025 appears in the log. Starting from the latest time WildFly started, I want Splunk to count the number of "input" and "output" events as described below.

"Input" means a new process started (the running count must be stored so it is always increasing, e.g. streamstats sum(count)).
"Output" means a process finished (its running count is continuously subtracted from the input count, e.g. streamstats sum(count)).

The goal is for Splunk to tell me how many processes have still not finished, and to show this on a timechart.

Here is the log:

2021-07-06 23:10:47,131 INFO [as] WFLYSRV0025: Wildfly EAP 7.0.0.GA
2021-07-06 23:11:12,197 INFO [app] input , time[10] User: anonymous
2021-07-06 23:11:12,187 INFO [app] output, User: anonymous
2021-07-06 23:11:12,178 INFO [app] input , time[10] User: anonymous
2021-07-06 23:11:12,167 INFO [app] output, User: anonymous
2021-07-06 23:11:12,159 INFO [app] input , time[10] User: anonymous
2021-07-06 23:11:12,149 INFO [app] output, User: anonymous
2021-07-06 23:11:12,141 INFO [app] input , time[10] User: anonymous

4 input, 3 output. In the log above, 1 input still remains unfinished.

2021-07-06 23:30:47,131 INFO [as] WFLYSRV0025: Wildfly EAP 7.0.0.GA
2021-07-06 23:30:47,197 INFO [app] input , time[10] User: anonymous
2021-07-06 23:30:47,141 INFO [app] input , time[10] User: anonymous
2021-07-06 23:30:47,131 INFO [app] input , time[10] User: anonymous
2021-07-06 23:30:47,134 INFO [app] output, User: anonymous
2021-07-06 23:30:47,138 INFO [app] output, User: anonymous
2021-07-06 23:30:47,131 INFO [app] input , time[10] User: anonymous
2021-07-06 23:30:47,131 INFO [app] input , time[10] User: anonymous

5 input, 2 output. In the log above, 3 inputs still remain unfinished.

I want a timechart that shows, for each minute, how many inputs are still outstanding. Any ideas? Thanks
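The running "inputs minus outputs since the last WildFly restart" logic (which the question sketches with streamstats) can be modeled directly; the log lines below are the second block from the question:

```python
lines = [
    "2021-07-06 23:30:47,131 INFO [as] WFLYSRV0025: Wildfly EAP 7.0.0.GA",
    "2021-07-06 23:30:47,197 INFO [app] input , time[10] User: anonymous",
    "2021-07-06 23:30:47,141 INFO [app] input , time[10] User: anonymous",
    "2021-07-06 23:30:47,131 INFO [app] input , time[10] User: anonymous",
    "2021-07-06 23:30:47,134 INFO [app] output, User: anonymous",
    "2021-07-06 23:30:47,138 INFO [app] output, User: anonymous",
    "2021-07-06 23:30:47,131 INFO [app] input , time[10] User: anonymous",
    "2021-07-06 23:30:47,131 INFO [app] input , time[10] User: anonymous",
]

outstanding = 0
for line in lines:
    if "WFLYSRV0025" in line:   # WildFly (re)started: reset the counter
        outstanding = 0
    elif "] input" in line:     # a process started
        outstanding += 1
    elif "] output" in line:    # a process finished
        outstanding -= 1

# 5 inputs, 2 outputs since the restart -> 3 still running
assert outstanding == 3
```

In SPL the same shape would be an eval that maps input to +1 and output to -1, a streamstats running sum reset at the WFLYSRV0025 marker, then a timechart with a 1-minute span of the latest running value.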
Hi, I have a report that is sent on a daily basis. The report provides a count for every one-hour bucket. Sometimes I get 0s for a few of those hourly buckets. Instead of 0s being reported, I would like my query to replace the 0s with data from my lookup file.

index=xxx sourcetype=xxx
| fields success_count
| stats sum(success_count) as success_count by _time
| bin _time span=1h
| stats max(success_count) as max_count by _time
| makecontinuous
| fillnull value=0
| inputlookup append=t "app_nullentries.csv"
| eval max_count=case(max_count="0" and !isnull(max_login_value), max_login_value, 1=1, max_login)

I need help getting my query to run this logic: if max_count=0, then get data from app_nullentries.csv and replace the 0s with what is stated in the file. The file has the exact date and time, and the value to replace the 0s with.
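The intended per-bucket logic, "if the hour's max_count is 0 and the lookup has a row for that exact time, substitute the lookup value", can be modeled as below (times and values are made up; in SPL this is usually done by joining the CSV onto _time with the lookup command rather than inputlookup append, which appends rows instead of matching them):

```python
# Stand-in for app_nullentries.csv: time -> replacement value
lookup = {
    "2021-07-07 03:00": 42,
}

rows = [
    {"_time": "2021-07-07 02:00", "max_count": 7},
    {"_time": "2021-07-07 03:00", "max_count": 0},  # should be replaced
    {"_time": "2021-07-07 04:00", "max_count": 0},  # no lookup row: stays 0
]

for row in rows:
    if row["max_count"] == 0 and row["_time"] in lookup:
        row["max_count"] = lookup[row["_time"]]
```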
Prior to getting Splunk, a customer 7-zipped their logs and copied them to a server. I just got them a brand-new Splunk 8.2.1 Enterprise system stood up with awesome dashboards. Now the customer is asking if I can import their archived .evtx files. Do we do this by just putting them in a folder and using a monitor input to point at them?
Scenario: two large organizations with two separate Splunk implementations. Org A acquires Org B, and in a consolidation effort they'd like to consolidate their search heads and search across the two indexer clusters. What are some approaches to this? One caveat is that both Org A and Org B have some overlapping index names (e.g. both have index=network). Is it possible to give a role a "default" cluster, so any time an Org A user searches they default to Org A, but this can be overridden by specifying splunk_server_group=OrgB or splunk_server_group=*?
I'd like to create an app with dashboards that rely on iframes. To do this in Splunk Enterprise, I know that the app's web.conf has to include the following:

[settings]
dashboard_html_allow_embeddable_content=true

However, I'd like to use this app with a Splunk Cloud instance, and when I run splunk-appinspect (with --mode precert --included-tags cloud) against the app, I encounter the following failure:

Web.conf File Standards
Ensure that web.conf is safe for cloud deployment and that any exposed patterns match endpoints defined by the app - apps should not expose endpoints other than their own. Including web.conf can have adverse impacts for cloud. Allow only [endpoint:*] and [expose:*] stanzas, with expose only containing pattern= and methods= properties.
web.conf
Check that web.conf only defines [endpoint:] and [expose:] stanzas, with [expose:*] only containing pattern= and methods=.
FAILURE: Only the [endpoint:*] and [expose:*] stanzas are permitted in web.conf for cloud. Please remove this stanza from web.conf: [settings]. File: default/web.conf Line Number: 1

Is there any way to enable embedded content for dashboards using iframes for apps in Splunk Cloud that will pass splunk-appinspect validation?
Hi, I am looking at TrackMe to monitor inputs on our heavy forwarders. Looking at the UI, Data Source Tracking would give me all I need IF it listed a host. My scenario: we have over 10 heavy forwarders pushing multiple sourcetypes, across multiple indexes, to our indexers. When one "data_name" is in error, I would like to know which heavy forwarder to look at to troubleshoot further. It would also be great if I could sort by host on the main page, or perhaps use tags that carry the host name. I couldn't see a way to meet my requirements. Any suggestions? Thank you, Chris
I am running ITSI 4.9.2 and Splunk 8.1.2. ITSI is not generating notable events because of this error: my correlation searches find notable events, but they do not get put anywhere. I am on a disconnected network, so I can't type it all:

WARN sendmodalert - action=itsi_event_generator - Alert action script returned error code 255.
INFO sendmodalert - action=itsi_event_generator - Alert action script completed in duration=1134ms with exit code 255.

There are a bunch of Python errors; the most interesting:

ERROR sendmodalert - action=itsi_event_generator - STDERR - SA_ITOA_app_common.solnlib.packages.requests.exceptions.SSLError: HTTPSConnectionPool(127.0.0.1, port=8088): Max retries exceeded with url: /services/collector (Caused by SSLError(SSLError(1, '[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)')))

Not sure when this first started, but any help is appreciated! Thanks!
Hello, we are seeing a multiline issue with Traceback logs in Splunk Cloud. We don't want the Traceback to be split into multiple events; we want those lines to be concatenated into one event under the Traceback, as shown in the log below. Can someone please help us fix this issue?

(The Splunk events and the sample log were attached as screenshots that are not reproduced here.)
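Since the original log sample only survives as a screenshot, this is a sketch under an assumed timestamp format: the usual approach is SHOULD_LINEMERGE = false plus a LINE_BREAKER anchored on the timestamp that starts each real event, so Traceback lines (which do not start with a timestamp) stay glued to the preceding event. The boundary regex can be checked with Python's re, whose flavor is close enough for this pattern:

```python
import re

# Assumed event shape: each real event starts with "YYYY-MM-DD HH:MM:SS"
log = (
    "2021-07-08 10:00:01 INFO request ok\n"
    "2021-07-08 10:00:02 ERROR request failed\n"
    "Traceback (most recent call last):\n"
    '  File "app.py", line 10, in handler\n'
    "ValueError: bad value\n"
    "2021-07-08 10:00:03 INFO next request\n"
)

# Break only before a new timestamp, mirroring a props.conf setting like
# LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
events = re.split(r"\n(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})", log.strip())
```

The stanza name, and the timestamp pattern itself, would need to be adapted to the real sourcetype; in Splunk Cloud the props.conf change must be applied wherever the data is parsed.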
I'm looking for records that have a "user_email" field defined and not equal to "unauthenticated". How do I do this? I tried:

search index=xyz sourcetype=abc (NOT user_email=unauthenticated AND user_email=*)

This does not appear to be working; I get loads of records with no user_email field defined.
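The intended predicate is "field exists AND value is not unauthenticated". Modeled outside SPL with made-up records it looks like this; note that in SPL a bare NOT user_email=unauthenticated also matches events where the field is missing entirely, which is why pairing it with user_email=* (and checking how the parentheses group) matters:

```python
records = [
    {"user_email": "a@example.com"},    # keep
    {"user_email": "unauthenticated"},  # drop: excluded value
    {"status": 200},                    # drop: field not defined
]

kept = [
    r for r in records
    if r.get("user_email") is not None and r["user_email"] != "unauthenticated"
]
```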
Hi, I have a Splunk Enterprise (8.1.0) account set up through my company, and I am able to log in to it online. But how do I set up/install this Enterprise instance on my local system (Windows)? I have pretty much only used Splunk on the web, so I am quite new to the setup/install part on a local machine.
Can someone help me with the regex to put in the props.conf file to mask the data as below? Everything except the first three letters should be masked.

Org_name="ethanfurniturelimited"
Org_name="david"

Required output:

Org_name="ethxxxxxxxxxxxxxxxxxx"
Org_name="davxxx"
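The transformation itself, keep the first three characters inside the quotes and replace each remaining character with x, can be prototyped in Python before wiring it into Splunk (this is a sketch of the desired output, not a props.conf answer: a sed-style SEDCMD replacement string cannot repeat the mask once per matched character, so in Splunk this is usually approximated with a fixed-length mask instead). Note that masking all but three of the five letters of "david" yields two x's, not the three shown in the sample output:

```python
import re

def mask_org_name(text: str) -> str:
    """Keep the first three letters of the Org_name value, mask the rest."""
    return re.sub(
        r'(Org_name="\w{3})(\w+)',
        lambda m: m.group(1) + "x" * len(m.group(2)),
        text,
    )

# "david" has five letters: three kept, two masked
assert mask_org_name('Org_name="david"') == 'Org_name="davxx"'
assert mask_org_name('Org_name="ethanfurniturelimited"') == (
    'Org_name="eth' + "x" * 18 + '"'
)
```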
Hello, on a single-instance Splunk, I'd like to ingest some simple JSON data:

{
GDH: 2021-07-08 16:54:00.617222
action: )reV[viZpy)4noHQFhs7;)*!wHlRaY3mo4R(o6,
dossier: FR668CORG2021078979348557
id: 4000000
ident: 267987
ip: 10.226.689.32
org: PN
service: 3647971
telephone: +33672108802
}

I'd like to use only KV_MODE, without INDEXED_EXTRACTIONS = json. Here's my sourcetype:

[data_kvm_json]
DATETIME_CONFIG =
KV_MODE =
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = GDH
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N
category = Structured
description = sourcetype - kv_mode extraction
disabled = false
pulldown_type = true
NO_BINARY_CHECK = true

Here's the result: the event is indexed at the time of ingestion, not at the event date, which is in the GDH field.

I have several sourcetypes in another environment (clustered IDX + SH) where this, positioned in props.conf on the indexer cluster, works fine. Is this a consequence of the architecture being a single instance? What did I miss? Thanks, Regards, Eglantine
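One thing worth separating out: the TIME_FORMAT string does match the GDH value, Splunk's %6N (six subsecond digits) corresponds to Python's %f, so the format itself is probably not the culprit. A quick check:

```python
from datetime import datetime

gdh = "2021-07-08 16:54:00.617222"

# Splunk: %Y-%m-%d %H:%M:%S.%6N  <->  Python: %Y-%m-%d %H:%M:%S.%f
dt = datetime.strptime(gdh, "%Y-%m-%d %H:%M:%S.%f")
```

A more likely suspect, hedged since the docs for your version should be checked: TIMESTAMP_FIELDS is documented as applying to structured inputs using INDEXED_EXTRACTIONS, so with KV_MODE-only parsing the timestamp would instead need to be located with TIME_PREFIX (pointing at the GDH field) together with TIME_FORMAT.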
Hello, I made a mistake during a migration of a data source: I moved from CSV format to JSON. Suppose the migration date is day A. On that day, I had this in my props.conf (the one on the indexer cluster):

[toto]
INDEXED_EXTRACTIONS = json

When I looked at the result on the search head cluster, the fields were displayed twice. I had missed the props.conf on the SHC saying:

[toto]
KV_MODE = json

So on day B, I rolled back and deleted INDEXED_EXTRACTIONS from the props.conf file on the IDX cluster. Since day B, results are perfectly fine. BUT: when I look at events between days A and B, the fields are displayed twice. I need to keep KV_MODE on, because otherwise I cannot extract any data when searching (no extraction was made at index time before day A or after day B). As a result, all calculations using any part of the period between A and B are wrong; I even get percentages > 100%.

My questions:
- Do you have any idea how to fix this so the results of the Splunk commands will be OK? (I can't believe I'm the only one to hit this wall.)
- Is there any way to delete the index-time extracted fields without deleting (masking) the data?

Thanks everyone, Regards, Ema