All Posts


Just to clarify the discussion I see here: everything under /opt/phantom should be owned by the phantom user. If any of the folders are owned by the root user instead of the phantom user, SOAR may not run (or, in this case, install) properly. This is mentioned in the installation instructions, but it's a single line toward the bottom and easy to miss: "Make sure you are logged in as the user meant to own the Splunk SOAR (On-premises) installation. Do not perform the installation command as the root user." Given how early you are in the process, it might just be best to start fresh rather than changing permissions on every folder.
Hello, I've adjusted my query as follows:

| bin span=3h _time
| stats values(uptime) AS Uptime BY _time, component_hostname

This way I get all uptimes listed in 3-hour spans per component_hostname. See the table:

_time              component_hostname   uptime
2024-11-11 15:00   router               0.00000 1.00000 5.00000

As you can see, there are results that include different uptimes, e.g. 0..., 1..., or 5.... Now I would like to create an alert that displays only the component_hostname values that had no uptime other than 0 for one day. Thank you
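A minimal sketch of that alert condition, assuming the alert's time range is set to the last 24 hours, uptime is numeric, and the base search (shown here only as a placeholder) returns the same events as above:

<your base search>
| stats max(uptime) AS max_uptime BY component_hostname
| where max_uptime=0

Only hosts whose uptime never rose above 0 during the window survive the final where, so the alert can simply trigger whenever this search returns results.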
Hey Giuseppe, thanks so much for the reply! That also doesn't seem to work; when I add it I get `Error in 'mstats' command: This command must be the first command of a search.` I guess I should have mentioned that I was using mstats; I didn't realize it had special rules. That might also be why eval isn't working as expected.
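For reference, mstats has to open the pipeline, with any eval, where, or further stats coming after it. A hedged sketch of the same "uptime stayed at 0" check in metrics form, assuming a metrics index named my_metrics and a metric called uptime (both placeholders) and a 24-hour search window:

| mstats max(uptime) AS max_uptime WHERE index=my_metrics BY component_hostname
| where max_uptime=0

The direct max(uptime) form assumes a reasonably recent Splunk version; older releases use max(_value) WHERE metric_name="uptime" instead.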
@dural_yyz any options?
Hi Team, below is my raw log. I want to fetch 38040 from the log; please guide.

ArchivalProcessor - Total records processed - 38040
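A minimal sketch, assuming the message always carries the literal text shown above (the index name is a placeholder):

index=your_index "ArchivalProcessor - Total records processed"
| rex "Total records processed - (?<total_records>\d+)"
| table _time total_records

The rex captures the trailing number into a total_records field.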
As a general rule, you should _always_ create separate certificates for separate entities (in your case, for separate components). Also remember that if you decide to enable client authentication, the certificates must be issued with the proper key usage.
You can't do it on your own. You might be able to work with the Splunk sales team on that.
2024-11-12 12:12:28.000,REQUEST="{"body":"<n1:Request xmlns:ESILib=\"http:/abcs/v1\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:n1=\"http://www.shaw.ca/esi/schema/product/inventoryreservation_create/v1\" xsi:schemaLocation=\"http://www.shaw.ca/esi/schema/product/inventoryreservation_create/v1 FES_InventoryReservation_create.xsd\"><n1:inventoryReservationCreateRequest><n1:brand>xyz</n1:brand><n1:channel>ABC</n1:channel><n1:bannerID>8669</n1:bannerID><n1:location>WD1234</n1:location><n1:genericLogicalResources><n1:genericLogicalResource><ESILib:skuNumber>194253408031</ESILib:skuNumber><ESILib:extendedProperties><ESILib:extendedProperty><ESILib:name>ReserveQty</ESILib:name><ESILib:values><ESILib:item>1</ESILib:item></ESILib:values></ESILib:extendedProperty></ESILib:extendedProperties></n1:genericLogicalResource></n1:genericLogicalResources></n1:inventoryReservationCreateRequest></n1:Request>

How do I retrieve the banner ID and location from the above using a Splunk query?

index="abc" sourcetype="oracle:transactionlog" OPERATION="/service/v1/inventory/reservation"
| rex "REQUEST=\"(?<REQUEST>.+)\", RESPONSE=\"(?<RESPONSE>.+)\", RETRYNO"
| spath input=REQUEST
| spath input=REQUEST output=Bannerid path=body.n1:Request{}.n1:bannerID
| table Bannerid

I used the above query, but it did not yield any results.
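Since the XML sits inside the REQUEST JSON with escaped quotes, spath can fail to parse it; a hedged alternative is to pull the two values straight out of the raw event with rex (the output field names here are just examples):

index="abc" sourcetype="oracle:transactionlog" OPERATION="/service/v1/inventory/reservation"
| rex "<n1:bannerID>(?<Bannerid>[^<]+)</n1:bannerID>"
| rex "<n1:location>(?<Location>[^<]+)</n1:location>"
| table Bannerid Location

This assumes the bannerID and location tags always appear literally in _raw; if spath is preferred, the escaped quotes in the body value would need to be stripped first (for example with replace()) so it parses as clean XML.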
Thank you @dural_yyz for your prompt response and for providing the documentation. However, I need further assistance regarding the SSL certificates that need to be generated for my Splunk environment. Could you please clarify whether I need to generate a separate certificate for each component (e.g., search head, indexers, forwarders, etc.)? Additionally, do I need to create different certificates for the various connections between these components?
In my air-gapped lab I have a 5GB Splunk license but am hardly using 1GB. Within the lab, we are building a smaller lab that will sit on a separate network and won't talk to the existing lab. We need to deploy Splunk in the new lab. How can I split the 5GB license into 3GB and 2GB, so I can use the 2GB portion in the new, smaller lab?
@dural_yyz I tried that, but it's not working.
Hello everyone, I'm having an issue that I'm trying to understand and fix. I have a dashboard table that displays the last 24 hours of events. However, the event _time always shows 11 minutes past the hour, and these aren't the correct event times. When I run the exact same search manually, I get the correct event times. Does anyone know why this is occurring and how I can fix it? Thanks for any help on this one, much appreciated. Tom
If it's the HP Nimble Storage solution, try: https://splunkbase.splunk.com/app/2840 Otherwise, I don't see any TAs on Splunkbase for HP storage.
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Securing_the_Splunk_platform_with_TLS

This article can explain it much better than I can, and it comes straight from the source.
You only have a sort on Business_Date; you never say to sort on StartTime as well. In fact, the StartTime field is evaluated after the sort is done. If you want a sort, it should be applied after both fields are available and in a sortable format:

| sort "Business_Date" "StartTime"
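As an illustration only (these table fields are placeholders, not the full panel query), the idea is to sort once both fields exist, using the raw epoch value of StartTime and formatting it for display afterwards:

| table Business_Date StartTime EndTime
| sort 0 Business_Date StartTime
| fieldformat StartTime=strftime(StartTime, "%F %T.%3N")

Sorting on the epoch value keeps the ordering numeric, while fieldformat only changes how the column is rendered.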
Hi, I have incoming data from two Heavy Forwarders. Both of them forward HEC data and their internal logs. How do I identify which HF is sending a particular piece of HEC data? Regards, Pravin
Hi Team, I have the panel query below. I want to sort on the basis of busDt and StartTime, but the results are not coming out correctly. Currently it sorts on the business date but not on the start time. Could anyone guide me on this?

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData" "CARS.UNB."
| rex "totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>),totalClosingBal=(?<totalClosingBal>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>)"
| eval totalAchCurrOutstBalAmt=tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),1)))
| eval totalAchBalLastStmtAmt=tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),1)))
| eval totalClosingBal=tonumber(mvindex(split(totalClosingBal,"E"),0)) * pow(10,tonumber(mvindex(split(totalClosingBal,"E"),1)))
| table busDt fileName totalAchCurrOutstBalAmt totalAchBalLastStmtAmt totalClosingBal totalRecordsWritten totalRecords
| sort busDt
| appendcols [search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
    | rex "CARS\.UNB(CTR)?\.(?<CARS_ID>\w+)"
    | transaction CARS_ID startswith="Reading Control-File /absin/CARS.UNBCTR." endswith="Completed Settlement file processing, CARS.UNB."
    | eval StartTime=min(_time)
    | eval EndTime=StartTime+duration
    | eval duration_min=floor(duration/60)
    | rename duration_min as CARS.UNB_Duration
    | table StartTime EndTime CARS.UNB_Duration]
| fieldformat StartTime = strftime(StartTime, "%F %T.%3N")
| fieldformat EndTime = strftime(EndTime, "%F %T.%3N")
| appendcols [search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing" "CARS.UNB."
    | rex "FileEventCreator - Completed Settlement file processing, (?<file>[^ ]*) records processed: (?<records_processed>\d+)"
    | rename file as Files
    | rename records_processed as Records
    | table Files Records]
| appendcols [search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
    | head 7
    | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
    | eval EBNCStatus="ebnc event balanced successfully"
    | table EBNCStatus True]
| rename busDt as Business_Date
| rename fileName as File_Name
| rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes)
| table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
Hello, I have a distributed Splunk architecture with a single search head, two indexers, and a management tier (License Master, Monitoring Console, and Deployment Server), in addition to the forwarders. SSL has already been configured for the web interfaces, but I would now like to secure the remaining components and establish SSL-encrypted connections between them as well. The certificates we are using are self-generated. Could you please guide me on how to proceed with securing all internal communications in this setup? Specifically, I would like to know whether I should generate a new certificate for each component and each connection, or whether there's a more efficient way to manage SSL across the entire environment. Thank you in advance for your help!
That helps me understand the problem! Still, I don't fully understand the solution. What are my options if I want to expose the interval to the client and still keep the application a single instance?
Hi, we have onboarded data from HP Storage and I am not sure whether there is a TA for this technology, or how to properly extract the fields from the logs and then map them into a Data Model. I have many logs there and I'm confused. Thank you in advance.