All Posts



How are the fields extracted?
What is the search?
We are trying to configure Octopus Deploy so that data is sent via HEC, and now I need to validate new logging locations in Splunk for sending logs. Which logging locations should be considered?
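If it helps, a quick way to validate that a new logging location is actually arriving is to search whatever index the HEC token writes to and break the results down by origin; the index and sourcetype names here are placeholders, not known values from your setup:

index=octopus_deploy sourcetype=octopus:deploy earliest=-1h
| stats count BY host, source, sourcetype

Any location that is logging correctly should show up as a distinct source/host combination within the time window.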
Hello Splunkers!! I want to ingest the two patterns of events below into Splunk. Both are JSON logs, but their timestamps are different. So far I have used the attributes below in my props.conf. Please let me know or suggest any other attribute I need to add so that both patterns of events parse smoothly without any time difference.

[exp_json]
AUTO_KV_JSON = false
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = \"time\"\:\"
category = Custom
pulldown_type = true

Pattern 1:
{"datacontenttype":"application/json","data":{"identificationStatus":"NO_IDENTIFICATION_ATTEMPTED","location":"urn:topology:segment:1103.20.15-1103.20.19","carrierId":null,"trackingId":"dc268ac7-168a-11ef-b02a-1feae60bb414"},"subject":"CarrierPositionUpdate","messages":[],"specversion":"1.0","classofpayload":"com.vanderlande.conveyor.boundary.event.business.outbound.CarrierPositionUpdate","id":"8252fb03-2eb2-4619-a59b-24e3280f9bda","source":"conveyor","time":"2024-05-20T09:29:53.361800Z","type":"CarrierPositionUpdate"}

Pattern 2:
{"data":{"physicalId":"60040160041570014272","carrierTypeId":"18","carrierId":"60040160041570014272","prioritizedDestinations":[{"name":"urn:topology:location:Pallet Loop (DEP):OBD/Returnflow:Exit01","priority":1},{"name":"urn:topology:location:Pallet Loop (DEP):OBD/Returnflow:Exit02","priority":1}],"transportOrderId":"TO_00001399"},"topic":"transport-order-commands-conveyor","specversion":"1.0","time":"2024-05-22T18:02:16.669Z","id":"34A0DF56-B0B2-4A73-9D7B-034A94D49747","type":"AssignTransportOrder"}

Thanks in advance!!
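For reference, a minimal sketch of a props.conf stanza that should handle both subsecond precisions (microseconds in Pattern 1, milliseconds in Pattern 2); the TIME_FORMAT, MAX_TIMESTAMP_LOOKAHEAD, and %6N width are suggestions to verify against sample data, not a confirmed fix:

[exp_json]
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
# Regex that positions Splunk just before the ISO-8601 value of the "time" key
TIME_PREFIX = "time":"
# %6N reads up to six subsecond digits, covering both .669 and .361800
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
# Keep the scan window tight so a later field can't be mistaken for the timestamp
MAX_TIMESTAMP_LOOKAHEAD = 40
category = Custom
pulldown_type = true

If the variable subsecond width still causes skew, dropping TIME_FORMAT and letting automatic timestamp recognition run from TIME_PREFIX is the usual fallback.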
If I run a search query, there is no issue with the raw events. From the Events tab everything looks to be in perfect format, and I can't say there is a data quality issue in the events. Only when this is visualised from the Statistics tab can I see it. Also, this is happening only with some events in the result set. I have attached screenshots of the normal results and the results with the data quality issue: the expected results with Request Id and other fields, and what is actually displayed (refer to the highlighted rows). Here is the event for one of the request IDs where the key-value pairs are in the expected format.
Hi @BRFZ, yes, you can manage non-clustered SHs and IDXs, even if I don't like this. Ciao. Giuseppe
I checked the firewall settings and it works for me. Thank you!
Hi @Brenny, I encountered the same problem today and found the issue. In my case, the event hub name wasn't the right one. It should be the one located in the Name field of the properties under the Event Hub instance, but I was using the one from the Event Hub namespace properties. Hope that helps!
Hello @gcusello, thank you for your response. So, is everything managed from the Forwarder Management section, including the management of non-clustered IDXs and SHs?
Hi @BRFZ, you can use the Deployment Server to also manage non-clustered Indexers and non-clustered Search Heads, even if it isn't a best practice. You cannot manage clustered SHs or IDXs. Ciao. Giuseppe
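For anyone following along, pointing a non-clustered indexer or search head at the Deployment Server follows the same pattern as for forwarders; the host names and app name below are illustrative only:

# deploymentclient.conf on the non-clustered instance
[target-broker:deploymentServer]
targetUri = ds.example.com:8089

# serverclass.conf on the deployment server
[serverClass:nonclustered_indexers]
whitelist.0 = idx*

[serverClass:nonclustered_indexers:app:indexer_base]
restartSplunkd = true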
The jq was a suggestion and an optional command to help with filtering; if they can't use it, then they have to find an alternative method.
| inputlookup file_intel
| stats count BY threat_key
| eval lookup=1
| append
    [| inputlookup ip_intel
     | stats count BY threat_key
     | eval lookup=2
     | fields threat_key lookup]
| append
    [| inputlookup http_intel
     | stats count BY threat_key
     | eval lookup=4
     | fields threat_key lookup]
| stats sum(lookup) AS mask BY threat_key
| search threat_key=*risklist_hrly*

If mask is 7, the key is in all files; if it is 6, it is in ip_intel and http_intel; if it is 5, it is in file_intel and http_intel; etc.
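If you would rather not decode the bitmask by eye, a small follow-on eval can spell out the membership; the in_file/in_ip/in_http field names are just illustrative:

| eval in_file=if(floor(mask/1)%2==1, "yes", "no"),
       in_ip=if(floor(mask/2)%2==1, "yes", "no"),
       in_http=if(floor(mask/4)%2==1, "yes", "no")

Each test isolates one bit of the mask (1 = file_intel, 2 = ip_intel, 4 = http_intel), matching the lookup values assigned above.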
@richgalloway it would be helpful if you could give a screenshot of a working solution, as I have tried this as well with no luck.
@deepakc and @isoutamo, if this requires installation of jq, then it would not be possible: if I want my customer to use the application and the prerequisite is to install the jq utility, I simply can't force that on my customer.
It looks like you have a problem with either your data (raw events), your ingest config (e.g. transforms.conf), or your search query. Unfortunately, since you have shared none of these, it is rather difficult to offer anything more constructive.
Hi @gcusello, thanks, I will check and get back.
With some of the events, we are facing an unexpected format in the query results. In the raw event there is no issue at all, and each field shows its own value. But when the data is queried and displayed as results in the Statistics section, the values of a few fields display incorrectly. Usually the search results show key-value pairs, but with some events the results show "fieldname1=fieldname1=value" and in some cases "fieldname1=fieldname3=value".

Example 1: Request_id=Request_id=12345 (expected: "Request_id=12345")
Example 2: Parent_id=message_id=456 (expected: "Parent_id=321")
Example 3: Parent_id=category=unknown (expected: "Parent_id=321")

Is this related to the parser or something else? We are unable to find what the issue could be here. Could anyone please help us fix this issue at the earliest?
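Output of the form fieldname=fieldname=value often points at a search-time extraction whose value capture also swallows the next key. As a hedged illustration only (this is not your actual config), a typical transforms.conf key-value extraction looks like this, with the value capture stopped at both the pair delimiter and '=' so it cannot absorb a following "key=" token:

# transforms.conf (applied at search time via a REPORT- line in props.conf)
[extract_kv_pairs]
REGEX = (\w+)=([^,=]+)
FORMAT = $1::$2

Comparing this shape against whatever EXTRACT/REPORT stanzas apply to your sourcetype would be a reasonable first diagnostic.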
I am trying to install Splunk with a GPO. Previously, I installed it locally on the machines with a batch file with additional installation parameters. Now I use the same batch file with a GPO and I get system error 1376, "The specified local group does not exist". The same user works when I install locally. When I install locally I use domain\username. The user is used to run the Splunk service.
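For context, a silent Splunk MSI install with a domain service account is usually shaped like the line below; the package name, account, and password are placeholders, and LOGON_USERNAME/LOGON_PASSWORD are the MSI properties for the "run as" account. Note that a GPO startup script runs as SYSTEM, which can change how the account and its local group memberships are resolved compared with an interactive install:

REM placeholders only; adjust package, account, and flags to your environment
msiexec.exe /i splunk-x64.msi AGREETOLICENSE=Yes LOGON_USERNAME="DOMAIN\svc_splunk" LOGON_PASSWORD="<password>" /quiet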
Try:
index=testing ("write" AND "@abc.com")
What results do you get?
Hi,
How do I write an SPL search query combining multiple fields in a single search?

Field 1 contains authorization data like "Write" or "Read".
Field 2 contains user id details like "@abc.com", user1, user2.

Question: how do I write an SPL query such as
index=testing ("write" AND "@abc.com")
i.e. a query combining multiple fields that contain "write" AND "@abc.com", so that an alert is sent when these conditions are satisfied?
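A hedged sketch of what such an alert search could look like, assuming the two fields are extracted as authorization and user_id (swap in your real field names):

index=testing authorization="write" user_id="*@abc.com"
| stats count BY user_id, authorization

Saved as an alert with the trigger condition "number of results > 0", this fires whenever both field conditions match; putting the conditions on named fields keeps the match precise instead of searching the raw text for loose strings.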