Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi team, We have installed a new Splunk UF on one of our file servers and configured all the config files correctly, but data is not getting forwarded from the file server to Splunk. It shows the error message below, and if I search by index name, no data appears on the search head. Could you please help with this?
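A few first checks that usually narrow this down (a sketch; host names are placeholders): confirm the forwarder's output configuration, then check whether the UF's own internal logs reach the indexers at all, which is a good connectivity test.

On the forwarder:
$SPLUNK_HOME/bin/splunk list forward-server
$SPLUNK_HOME/bin/splunk btool outputs list --debug

On the search head:
index=_internal host=<file_server_hostname> source=*splunkd.log* (ERROR OR WARN)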
Hi, We are interested in installing a Geo Server as mentioned here: Host a Geo Server (appdynamics.com). However, we cannot find the GeoServer.zip mentioned in the downloads. Where can we find this .zip file? Thanks, Roberto
Hi Splunk Community, I need to build an alert that is triggered if a specific signature is not present in the logs for a period of time. The message shows up in the logs every 3 or 4 seconds under BAU conditions, but there are some instances of longer intervals going up to 4 minutes. What I had in mind was a query that runs over a 15-minute timeframe using 5-minute buckets, to ensure that I catch the negative trend and not only the one-offs. I have made it this far in the query:

index=its-em-pbus3-app "Checking the receive queue for a message of size"
| bin _time span=5m aligntime=@m
| eval day_of_week = strftime(_time,"%A")
| where NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| eval date_hour = strftime(_time, "%H")
| where (date_hour > 7 AND date_hour < 19)
| stats count by _time

I only need the results for Monday to Friday between the hours of 7AM and 7PM. The query returns the count by _time, which is great, but if the signature is not present I don't get any hits, obviously. So I can count the number of occurrences within the 5-minute buckets, but I can't assess the intervals or determine the absence using count. I thought of, perhaps, manipulating timestamps so I could calculate the difference between the current time and the last timestamp of the event, but I am not exactly sure how to compare a timestamp to "now". I would appreciate some advice on either how to count "nulls" or how to cross-reference the timestamps of the signature against the current time. Thank you all in advance.
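One possible approach, assuming the goal is to alert on empty 5-minute buckets: timechart (unlike stats) fills buckets that contain no events with count=0, so absence becomes searchable. A sketch based on the query above:

index=its-em-pbus3-app "Checking the receive queue for a message of size"
| timechart span=5m count
| eval day_of_week=strftime(_time,"%A"), date_hour=tonumber(strftime(_time,"%H"))
| where NOT (day_of_week="Saturday" OR day_of_week="Sunday") AND date_hour>7 AND date_hour<19 AND count=0

For the "compare against now" idea, one sketch is to take the latest event timestamp and subtract it from now():

index=its-em-pbus3-app "Checking the receive queue for a message of size"
| stats latest(_time) as last_seen
| eval seconds_since_last=now()-last_seen
| where seconds_since_last>300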
Hi, I am having some trouble understanding the right configuration for collecting logs from the Event Hub of the app "Microsoft Cloud Services". From the documentation (Configure Event Hubs) it is not clear how to set these three parameters for a log source that collects a LOT of logs every minute.

interval --> The number of seconds to wait before the Splunk platform runs the command again. The default is 3600 seconds. Is there a way in the _internal logs to check when the command is executed?
max_batch_size --> The maximum number of events to retrieve in one batch. The default is 300. This is pretty clear, but can we increase this value as much as we want? I believe we are encountering some performance issues with that.
max_wait_time --> The maximum interval in seconds that the event processor will wait before processing. The default is 300 seconds. Processing what? Waiting for what?

Does anyone know a combination of values for these three fields that could optimize an Event Hub with thousands and thousands of logs?
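On the first sub-question: the add-on's modular inputs log their activity to _internal, so a search along these lines should show each execution. A sketch; the log file name pattern is an assumption and may differ by add-on version:

index=_internal source=*splunk_ta_microsoft-cloudservices* event_hub
| sort - _time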
I am not able to search on any attribute whose name contains a dot, like env.cookiesSize.

NOT WORKING
------------------
index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| spath "context.duration" | search "context.duration"="428.70000000006985"
| spath "context.env.cookiesSize" | search "context.env.cookiesSize"=7670

WORKING
index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| spath "context.duration" | search "context.duration"="428.70000000006985"

Let me know the solution for this. The event looks like:

context: {
  duration: 428.70000000006985
  env.automation-bot: false
  env.cookiesSize: 7670
  env.laneColor: blue
}
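A possible workaround, assuming the problem is that env.cookiesSize is a single JSON key containing a literal dot (so spath reads the dot as a nesting separator): extract everything with a bare spath, then rename the dotted field to a dot-free name before searching on it. A sketch:

index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| spath
| rename "context.env.cookiesSize" as cookiesSize
| search cookiesSize=7670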
We are receiving some notables that reference an encoded command being used with PowerShell, and the notable lists the command in question. The issue is that the command it is listing appears to be incomplete when we decode the string. Does anyone know a way for us to potentially hunt down and figure out what the full encoded command referenced in the notable may be?
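If PowerShell Script Block Logging is enabled on the endpoint, the full (reassembled) script usually lands in Windows event 4104, which may contain what the truncated encoded string in the notable was part of. A sketch, where the index and source values are assumptions that depend on your Windows TA inputs:

index=wineventlog source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
| table _time host ScriptBlockText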
Hello Splunkers, I am trying to implement an export scenario to Rapid7 in which all Active Directory data is transferred to the other service. With the official guide from Splunk I can export the data, but the data is not formatted as JSON. Instead, every line is sent on its own, so every attribute ends up as its own entry, which doesn't help, because I can't search a log that is split into different pieces. Does anyone have experience with this transfer process?
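One option worth testing, assuming the export runs as a scheduled search: collapse each result row into a single JSON object with the tojson command (available from Splunk 8.1) before it is handed to the export mechanism. The index, sourcetype, and field list below are placeholders:

index=<ad_index> sourcetype=<ad_sourcetype>
| table sAMAccountName mail memberOf
| tojson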
index=abc sourcetype=abc | timechart span=1m eval(count(IP)) AS TimeTaken

Now I want to get the 95th percentile of these per-IP counts, something like:

| stats perc95(TimeTaken) as Perc_95 by IP

So how should I write this query?
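A sketch of one way to do it: build the per-minute counts with bin + stats (keeping IP as a split field), then take the 95th percentile per IP:

index=abc sourcetype=abc
| bin _time span=1m
| stats count as TimeTaken by _time IP
| stats perc95(TimeTaken) as Perc_95 by IP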
Hi, I have a JSON file in Splunk with an arguments{} field like this:

field1=[content_field1] field2=[content_field2] field3=[content_field3]

Splunk doesn't recognize the fields field1 etc. I assume it is because this is not really JSON format, but I want to be sure. I can extract the fields with rex, but it would be better if Splunk could recognize the fields automatically. I think the content of the log file should be something like this:

arguments{}:{"field1":"content_field1", "field2":"content_field2", "field3":"content_field3"}

but I want to be sure that this is the best way (because if it is, the logging has to be changed). Does Splunk recognize the fields automatically if events are logged in this way? Is the above the best way, or are there better ways to let Splunk recognize the fields automatically?
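For what it's worth: if each event is valid JSON, Splunk's search-time JSON extraction names nested keys automatically, e.g. arguments.field1. A minimal sketch of a log line that would auto-extract, assuming the whole event is the JSON object:

{"arguments": {"field1": "content_field1", "field2": "content_field2", "field3": "content_field3"}}

If auto-extraction is off for the sourcetype, spath can still pull the keys out explicitly:

| spath path=arguments.field1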
Hi Team, Could you please help me with installing the pandas module for Phantom? Regards, Harisha
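A heavily hedged sketch: on an on-prem Phantom/SOAR box, packages generally need to go into Phantom's bundled Python rather than the system Python. The interpreter path below is an assumption and varies between SOAR versions:

# run as the phantom user; the path is an assumption, adjust for your install
/opt/phantom/usr/python3/bin/pip3 install pandas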
We are trying to configure Octopus Deploy, where data is sent via HEC, and now I need to validate the new logging locations in Splunk that logs are sent to. Which logging locations should be considered?
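Two searches that can help validate arrival, offered as a sketch (the sourcetype filter is a placeholder for whatever your HEC token assigns): HEC ingestion errors show up under splunkd's HttpInputDataHandler component, and a simple breakdown shows where the data actually landed.

index=_internal sourcetype=splunkd component=HttpInputDataHandler

index=* sourcetype=<octopus_sourcetype> | stats count by index sourcetype source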
Hello Splunkers!! I want to ingest the two patterns of events below in Splunk. Both are JSON logs, but their timestamps are different. So far I have used the attributes below in my props.conf. Please let me know or suggest any other attribute I need to add so that both event patterns parse smoothly without any time difference.

[exp_json]
AUTO_KV_JSON = false
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = \"time\"\:\"
category = Custom
pulldown_type = true

Pattern 1: {"datacontenttype":"application/json","data":{"identificationStatus":"NO_IDENTIFICATION_ATTEMPTED","location":"urn:topology:segment:1103.20.15-1103.20.19","carrierId":null,"trackingId":"dc268ac7-168a-11ef-b02a-1feae60bb414"},"subject":"CarrierPositionUpdate","messages":[],"specversion":"1.0","classofpayload":"com.vanderlande.conveyor.boundary.event.business.outbound.CarrierPositionUpdate","id":"8252fb03-2eb2-4619-a59b-24e3280f9bda","source":"conveyor","time":"2024-05-20T09:29:53.361800Z","type":"CarrierPositionUpdate"}

Pattern 2: {"data":{"physicalId":"60040160041570014272","carrierTypeId":"18","carrierId":"60040160041570014272","prioritizedDestinations":[{"name":"urn:topology:location:Pallet Loop (DEP):OBD/Returnflow:Exit01","priority":1},{"name":"urn:topology:location:Pallet Loop (DEP):OBD/Returnflow:Exit02","priority":1}],"transportOrderId":"TO_00001399"},"topic":"transport-order-commands-conveyor","specversion":"1.0","time":"2024-05-22T18:02:16.669Z","id":"34A0DF56-B0B2-4A73-9D7B-034A94D49747","type":"AssignTransportOrder"}

Thanks in advance!!
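Both sample timestamps are ISO 8601, just with different fractional-second precision (.361800Z vs .669Z), which Splunk's automatic timestamp recognition handles once it is pointed at the right spot. A sketch of the two attributes that would typically be adjusted (values are suggestions, not the only valid ones):

TIME_PREFIX = \"time\"\s*:\s*\"
MAX_TIMESTAMP_LOOKAHEAD = 40

Leaving TIME_FORMAT unset lets the ISO 8601 auto-detection absorb the differing subsecond digits in the two patterns.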
With some of the events, we are facing an unexpected format in the query results. In the raw event there is no issue at all, and each field shows its own value. But when it is queried and displayed in the statistics section as results, the values of a few fields are displayed incorrectly. Usually the search results show key=value pairs, but with some events the search results show "fieldname1=fieldname1=value", and in some cases "fieldname1=fieldname3=value".

Example 1: Request_id=Request_id=12345 (expected: "Request_id=12345")
Example 2: Parent_id=message_id=456 (expected: "Parent_id=321")
Example 3: Parent_id=category=unknown (expected: "Parent_id=321")

Is this related to the parser or something else? We are unable to find what the issue could be here. Could anyone please help us fix this issue at the earliest?
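One common cause worth checking, offered as an assumption rather than a diagnosis: when a sourcetype uses INDEXED_EXTRACTIONS at index time and the search head also runs automatic key-value extraction at search time, the same keys get extracted twice and the rendered key=value pairs can double up. A sketch of the search-head-side props.conf that disables the second pass (the stanza name is a placeholder):

[your_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false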
I am trying to install Splunk with a GPO. Previously, I installed it locally on the machines with a batch file with additional installation parameters. Now I use the same batch file with a GPO and I get system error 1376, "The specified local group does not exist". The same user works when I install locally. When I install locally I use domain\username. The user is used to run the Splunk service.
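For comparison, a sketch of the documented silent-install flags for the UF MSI (the file name and credentials are placeholders); running this directly under the GPO machine context can help isolate whether the batch wrapper or the SYSTEM context triggers the 1376:

msiexec.exe /i splunkforwarder-<version>-x64-release.msi AGREETOLICENSE=Yes LOGON_USERNAME="DOMAIN\username" LOGON_PASSWORD="<password>" /quiet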
Hi, how do I write an SPL search query that filters on multiple fields in a single search?

Field 1 contains authorization data like "Write" or "Read".
Field 2 contains user ID details like "@abc.com", user1, user2.

Question: how do I write an SPL query like

index=testing ("write" AND "@abc.com")

that filters on multiple fields containing "write" AND "@abc.com"? When these conditions are satisfied, an alert has to be sent.
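Assuming both values live in already-extracted fields (the field names below are placeholders for your actual ones), the two conditions can simply be ANDed in one search, which can then be saved as the alert's base query:

index=testing authorization="Write" user_id="*@abc.com"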
Hello, Could someone please help me with this question: should the clients of the deployment server only be forwarders, or can any component of the architecture (indexers, search heads) be a client of the deployment server as well?
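For reference, any full Splunk Enterprise instance can technically be a deployment client via deploymentclient.conf (sketch below; the hostname is a placeholder). The usual caveat is that clustered indexer peers and search head cluster members are managed by the cluster manager and the deployer instead, not the deployment server.

[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.com:8089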
Hi Team, I need help creating an alert that fires if the latest hour's count is 10% (or more) lower than the count for the same hour on the same day last week.

For example: right now I can get the counts, but I am not sure how to find a 10% or greater difference to trigger the alert.

index=ABC sourcetype=XYZ
| timechart span=1h count
| timewrap d series=short
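A sketch of one way, assuming the comparison is the last completed hour vs. the same hour seven days earlier (the week of events in between is scanned but discarded, so this is not the most efficient form):

index=ABC sourcetype=XYZ earliest=-7d-1h@h latest=@h
| eval bucket=case(_time>=relative_time(now(),"-1h@h"), "current", _time>=relative_time(now(),"-7d-1h@h") AND _time<relative_time(now(),"-7d@h"), "lastweek")
| where isnotnull(bucket)
| stats count(eval(bucket="current")) as current, count(eval(bucket="lastweek")) as lastweek
| eval pct_drop=round((lastweek-current)*100/lastweek,1)
| where pct_drop>=10

Scheduled hourly, the alert would fire whenever this returns a result.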
Hello, I am getting the following error while trying to enable SAML for my deployment server:

Verification of SAML assertion using the IDP certificate provided failed. Unknown signer of SAML response.

Kindly provide any suggestions.
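One hedged first check: this error generally means the IdP certificate Splunk has on disk no longer matches the certificate the IdP is signing responses with (for example after an IdP cert rotation). Comparing fingerprints is a quick test; the path below is the conventional default and may differ in your deployment:

# authentication.conf ([saml] stanza) points at the stored IdP cert, e.g.
idpCertPath = $SPLUNK_HOME/etc/auth/idpCerts/idpCert.pem

# compare against a fresh copy of the signing cert from the IdP metadata
openssl x509 -in idpCert.pem -noout -fingerprint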
Looking for an SPL query to get the index-wise log consumption, split up by month, for the last 6 months.
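A sketch using license_usage.log on the license manager, which records per-index usage in the field idx and bytes in the field b; note this assumes _internal retention actually covers 6 months, which is longer than the default:

index=_internal source=*license_usage.log* type=Usage earliest=-6mon@mon
| eval month=strftime(_time,"%Y-%m")
| stats sum(b) as bytes by month idx
| eval GB=round(bytes/1024/1024/1024,2)
| fields month idx GB
| sort month idx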
When checking the URL categorization for a URL, it appears that the URL has been classified under two categories, for example Business/Economy and File Storage/Sharing. However, we can only see one category in the Splunk field (field name: filter_category). Is this something to do with the data collection in Splunk? Any details are appreciated. Check the current WebPulse categorization for any URL: https://sitereview.bluecoat.com/#/
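If the proxy delivers both categories in one delimited string, Splunk may only be displaying the first token; a quick test is to split the field into a multivalue field. A sketch; the delimiter, index, and sourcetype are assumptions, so check a raw event first:

index=<proxy_index> sourcetype=<bluecoat_sourcetype>
| makemv delim=";" filter_category
| stats count by filter_category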