All Posts



Hi @myappdy.mympco, I did some searching for similar issues and found this. The same errors were seen in the DB Agent and controller logs due to a couple of entries related to the DB agent being missing in the entity_relationship table.
I am attempting to use a lookup to feed some UNC file paths into a dashboard search, but I am getting tripped up by all the escaping of the backslashes and double quotes in my string. I want to call a field from a lookup with something like this as the actual value:

file_path="\\\\*\\branch\\system\\type1\\*" OR file_path="\\\\*\\branch\\system\\type2\\*"

I want to populate a field in my lookup table with actual key/value pairs and output the entire string based on a menu selection. Unfortunately, if I try this, Splunk escapes all the double quotes and all the backslashes, and it ends up looking like this in the litsearch, which is basically useless:

file_path=\"\\\\\\\\*\\\\branch\\\\service\\\\type1\\\\*\" OR file_path=\"\\\\\\\\*\\\\branch\\\\service\\\\type2\\\\*\"

How can I either properly escape the value within the lookup table so this doesn't happen, or is there any way to get Splunk to output the lookup value as a literal string and not try to interpret it?
Is it a single search head or search head cluster? - No cluster, all single search heads

Then really it should just be a user migration to the new SH that is running side by side, from a backup as described. With them being singletons, there's already a reduced amount of redundancy or expectation of no disruption, so it should just be an organized migration and shutdown.

Have the indexers already been migrated to Azure? - No indexers have been migrated yet

So will the migrated Search Heads be searching back to on-prem when the cutover is done? If so, as mentioned, you need to have the networking right, so that's another reason to build the SH in Azure alongside the existing one and confirm search and comms work.

Will the SH(C) be searching Azure or On-Prem Indexers as well? - SH will be searching Azure indexers

So when you build the Azure SHs, they will be searching net-new indexers in Azure? Do they need to search on-prem too? You just have to nail the config and networking.

What "components" do you rely on most on this SH(C)? Premium apps like ES or ITSI? Or just Splunk Enterprise apps? - SH will rely on both premium apps and Splunk Enterprise apps

Well, ES and ITSI are their own beasts; see the documentation for those. Enterprise apps will depend on what needs to persist and be migrated.

Definitely ideal to involve experienced services folks or experienced Splunk Admins. Either way, it is basically: build a new SH, copy configs over, validate, migrate users over. Most of the traps you may encounter along the way should be resolved while standing up the new SH and getting its configs running alongside your existing one.

This is nuanced, and it's why you won't really find a 1:1 migration guide, because with Splunk, "it depends".

The amount of disruption will be mitigated by having a good understanding of the major workloads running in the environments (the Monitoring Console can help with by-app breakdowns), what needs to be carried over to the new environment, and what can or needs to be cleaned up to reduce the work needed in migration.
Not sure what I am doing wrong. I have a datamodel with a dataset that I can pivot on a field when using the datamodel explorer. When I try to use | tstats it does not work. I get results as expected with

| tstats count as order_count from datamodel=spc_orders

however, if I try to filter

| tstats count as order_count from datamodel=spc_orders where state="CA"

I get 0 results. What's going on here?
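A hedged sketch of the usual fix: in tstats, fields other than _time generally have to be qualified with the dataset (root object) name from the datamodel definition. Assuming the root dataset is named spc_orders_data (a placeholder; check your datamodel for the real name), the filter would look like:

```spl
| tstats count AS order_count from datamodel=spc_orders where spc_orders_data.state="CA"
```

If the unqualified field name does not match a field the datamodel knows about, tstats silently returns 0 results rather than erroring.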
With the above request and response, can you tell me how we can retrieve the bannerID and location using a Splunk query?
@PickleRick  @ITWhisperer  When I use this sort: sort "Business_Date" "StartTime" it only sorts on Business_Date and not StartTime. Could you please suggest a fix?
This event doesn't appear to have a REQUEST. Splunk SPL works on a pipeline of events, effectively processing each event one at a time. Usually, with request and response log events, you need to find a way to correlate the response with the request.
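A minimal sketch of one common correlation pattern, assuming both the request and response events carry a shared correlation field (here a hypothetical transaction_id; the index and sourcetype are taken from the question below):

```spl
index="abc" sourcetype="oracle:transactionlog"
| stats values(REQUEST) AS REQUEST values(RESPONSE) AS RESPONSE BY transaction_id
```

This collapses the two events of each transaction onto one row, after which you can extract fields from both sides together.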
I do have a RESPONSE field as well in the API RESPONSE="{"body":"<?xml version=\"1.0\" encoding=\"UTF-8\"?><fes:Response xmlns:fes=\"http://www.abc/product/inventoryreservation_create/v1\"><fes:inventoryReservationCreateResponse><fes:reservationId>fd19244445edb18</fes:reservationId><fes:requestStatus>Success</fes:requestStatus><fes:requestState>Order Reserved</fes:requestState></fes:inventoryReservationCreateResponse></fes:Response>","headers":{"content-type":"text/xml;charset=utf-8","accept":"application/xml,application/fastinfoset","server":"Jetty(9.4.27.v20200227)","uritemplate":"/service/v1/inventory/reservation","operationname":"CREATE_RESERVATION","method":"POST","url":"http://192.123/service/v1/inventory/reservation","x_shaw_request_tracing":"location_id","singularityheader":"appId=60*ctrlguid=1730261321*acctguid=602406e5-b988-4764-be9d-e041209f6ed8*ts=1731413516129*btid=40467*snapenable=true*donotresolve=true*guid=a61228ec-2eed-4ec7-b2eb-1e0ebb10ad65*exitguid=1|3|17*unresolvedexitid=13486*cidfrom=649,{[UNRESOLVED][17715]},648,{[UNRESOLVED][18213]},689*etypeorder=HTTP,HTTP,HTTP,HTTP,HTTP*esubtype=HTTP,HTTP,HTTP,HTTP,HTTP*cidto={[UNRESOLVED][17715]},648,{[UNRESOLVED][18213]},689,{[UNRESOLVED][13486]}","asyncreplyfordestinaton":"Svc-REST.DIRECTFULFILLMENT.CreateInventoryReservation:PROCESS","x_shaw_service_orchestration_id":"Id-ebcc8a602f57c17646182490","environment":"prod","final_match_group":"/","x_shaw_onbehalfof_id":"CREATE","directfulfillment.reservationid":"fd19244445edb18","lg_header":"Interaction=IwDMcZ3MDAZ5okkgkwEJDMgK;Locus=uWm7UBiog5Kb3BmVyz1/dA==;Flow=4geEzEzItMPK3CMgkwEODMgK;Chain=IQDMcZ3MDAZ5okkgkwEJDMgK;UpstreamOpID=eMsPL0LlEOcPDTl5JMfY6Q==;CallerAddress=tossbprd1app03.fcc.bss.globalivewireless.local;","content-length":"380"}}",
1. Just saying "not working" doesn't say anything. We have no idea what the results should look like, what they actually look like, what data you have, and so on.

2. Apart from your main question, I see another issue with your search: you sort first, then add some data with appendcols. Are you absolutely sure that you get the right data in the right places?

3. And finally, if you post SPL code, please do so as either a code block (the </> symbol at the top of the text-edit widget) or in a preformatted style so that it doesn't get butchered into an unreadable blob of text.
Your sample event does not include "RESPONSE", so the rex will not be able to extract the REQUEST field.
| rex "Total records processed - (?<processed>\d+)"
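Applied to the sample event from the question, a full search might look like this (the index name is a placeholder):

```spl
index=your_index "Total records processed"
| rex "Total records processed - (?<processed>\d+)"
| table _time processed
```

The named capture group (?<processed>\d+) pulls the trailing number (38040 in the sample) into a field called processed.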
| bin span=3h _time | stats values(uptime) AS Uptime BY _time, component_hostname | where Uptime=0
Just to clarify the discussion I see here: everything under /opt/phantom should be owned by the phantom user. If any of the folders are owned by the root user instead of the phantom user, SOAR may not run (or, in this case, install) properly. This is mentioned in the installation instructions, but it's a single line toward the bottom and easy to miss: "Make sure you are logged in as the user meant to own the Splunk SOAR (On-premises) installation. Do not perform the installation command as the root user." Given how early you are in the process, it might just be best to start fresh rather than changing permissions on every folder.
Hello, I've adjusted my query as follows:

| bin span=3h _time
| stats values(uptime) AS Uptime BY _time, component_hostname

This way I get all uptimes listed in a span of 3 hours by component_hostname. See the table:

_time: 2024-11-11 15:00 | component_hostname: router | uptime: 0.00000 1.00000 5.00000

You can see there are results which include different uptimes, e.g. 0..., 1..., or 5.... Now I would like to create an alert that displays only those component_hostname values which had no uptime other than 0 for one day. Thank you
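A minimal sketch of one way to do that, appended to the same base search as above (the field names are taken from the query in this thread): widen the bin to a day, then keep only hosts whose set of uptime values for that day is exactly {0}:

```spl
| bin span=1d _time
| stats values(uptime) AS Uptime BY _time, component_hostname
| where mvcount(Uptime)=1 AND Uptime=0
```

Because values() returns a deduplicated multivalue list, mvcount(Uptime)=1 AND Uptime=0 matches only hosts that reported nothing but 0 in the whole day.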
Hey Giuseppe, Thanks so much for the reply! That also doesn't seem to work: when I add it, I get `Error in 'mstats' command: This command must be the first command of a search.` I guess I should have mentioned that I was using mstats; I didn't realize that it had special rules. That might also be why eval isn't working as expected.
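For reference, a minimal sketch of mstats used correctly as the first command of a search (the metric name uptime and index metrics_index are placeholders; adjust to your environment):

```spl
| mstats avg(uptime) AS uptime WHERE index=metrics_index span=3h BY component_hostname
```

Any filtering on the results then goes after the mstats, e.g. a following | where or | eval, rather than before it.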
@dural_yyz  Any options?
Hi Team, Below is my raw log. I want to fetch 38040 from the log; please guide me.

ArchivalProcessor - Total records processed - 38040
As a general rule, you should _always_ create separate certificates for separate entities (in your case, for separate components). Also remember that if you decide to enable client authentication, the certificate must be issued with the proper key usage.
You can't do it on your own. You might be able to work with Splunk sales team on that.
2024-11-12 12:12:28.000,REQUEST="{"body":"<n1:Request xmlns:ESILib=\"http:/abcs/v1\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:n1=\"http://www.shaw.ca/esi/schema/product/inventoryreservation_create/v1\" xsi:schemaLocation=\"http://www.shaw.ca/esi/schema/product/inventoryreservation_create/v1 FES_InventoryReservation_create.xsd\"><n1:inventoryReservationCreateRequest><n1:brand>xyz</n1:brand><n1:channel>ABC</n1:channel><n1:bannerID>8669</n1:bannerID><n1:location>WD1234</n1:location><n1:genericLogicalResources><n1:genericLogicalResource><ESILib:skuNumber>194253408031</ESILib:skuNumber><ESILib:extendedProperties><ESILib:extendedProperty><ESILib:name>ReserveQty</ESILib:name><ESILib:values><ESILib:item>1</ESILib:item></ESILib:values></ESILib:extendedProperty></ESILib:extendedProperties></n1:genericLogicalResource></n1:genericLogicalResources></n1:inventoryReservationCreateRequest></n1:Request>

How do I retrieve the bannerID and location from the above using a Splunk query?

index="abc" sourcetype="oracle:transactionlog" OPERATION ="/service/v1/inventory/reservation"
| rex "REQUEST=\"(?<REQUEST>.+)\", RESPONSE=\"(?<RESPONSE>.+)\", RETRYNO"
| spath input=REQUEST
| spath input=REQUEST output=Bannerid path=body.n1:Request{}.n1:bannerID
| table Bannerid

I used the above query, but it did not yield any results.
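One hedged alternative sketch: since the body here is XML embedded inside a JSON string, spath path-matching on the namespaced elements can fail, and a direct rex against the raw event is often simpler (the index, sourcetype, and OPERATION values follow the search above):

```spl
index="abc" sourcetype="oracle:transactionlog" OPERATION="/service/v1/inventory/reservation"
| rex "<n1:bannerID>(?<bannerID>[^<]+)</n1:bannerID>"
| rex "<n1:location>(?<location>[^<]+)</n1:location>"
| table bannerID location
```

For the sample event above, this should yield bannerID=8669 and location=WD1234.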