All Topics

Configured the otelcol-contrib agent.config.yaml file to send data to Splunk Cloud. I'm getting the data, but the source is coming through as the HEC token name. The filelog receiver is configured to read several files:

filelog/sys:
  include: [ /var/log/messages, /var/log/auth, /var/log/mesg$$, /var/log/cron$$, /var/log/acpid$$ ]
  start_at: beginning
  include_file_path: true
  include_file_name: false

Exporters: I did not set a source in the exporters section. Option 1: by default Splunk uses the HEC token name as the source value. Option 2: I can set the source to a log file path, but with multiple files that doesn't work. In Splunk Cloud the source shows the HEC token value, while the log_file_path field does contain the correct log file path. Is there a way I can configure the source to take the log file path?
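What I'm considering trying (untested): since include_file_path: true already records the path on each log record as the log.file.path attribute, a transform processor could copy that attribute into com.splunk.source, which, as far as I understand, the splunk_hec exporter maps to the HEC source per record. The processor, exporter and pipeline names below are placeholders to adjust to the actual config:

processors:
  transform/set_source:
    log_statements:
      - context: log
        statements:
          # copy the file path recorded by the filelog receiver into the
          # attribute the splunk_hec exporter reads as the HEC "source"
          - set(attributes["com.splunk.source"], attributes["log.file.path"])

service:
  pipelines:
    logs:
      receivers: [filelog/sys]
      processors: [transform/set_source]
      exporters: [splunk_hec]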
One of my customers is using a tool with a REST API, the SAP ALM Analytics API (ref. https://api.sap.com/api/CALM_ANALYTICS/overview). They are looking to get data from the API into a Splunk index, so we suggested an intermediary application (like a scheduled function) to pull the data from SAP and send it to Splunk using an HEC token. Is it possible to use something in Splunk directly to pull the data from the 3rd party, or is the suggested approach a good way to go?
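For context, the intermediary we have in mind is just a small scheduled job along these lines; this is only a sketch, and the SAP endpoint/auth, HEC host, token, index, sourcetype and the result key are all placeholders that depend on the environment:

import requests

# Placeholders: adjust the SAP ALM Analytics endpoint/auth and the Splunk HEC
# host, token, index and sourcetype to the actual environment.
SAP_URL = "https://<sap-alm-tenant>/api/calm-analytics/v1/..."  # see the API reference above
SAP_TOKEN = "<sap-oauth-token>"
HEC_URL = "https://<splunk-host>:8088/services/collector/event"
HEC_TOKEN = "<hec-token>"

def forward_sap_to_splunk():
    # Pull data from the SAP ALM Analytics API.
    sap_resp = requests.get(SAP_URL, headers={"Authorization": f"Bearer {SAP_TOKEN}"}, timeout=30)
    sap_resp.raise_for_status()

    # Forward each record to the Splunk HTTP Event Collector.
    hec_headers = {"Authorization": f"Splunk {HEC_TOKEN}"}
    for record in sap_resp.json().get("value", []):  # "value" as the result key is an assumption
        payload = {"event": record, "sourcetype": "sap:alm:analytics", "index": "<target-index>"}
        requests.post(HEC_URL, json=payload, headers=hec_headers, timeout=30).raise_for_status()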
I am calling the trace endpoint https://ingest.<realm>.signalfx.com/v2/trace/signalfxv1 and sending this span in the body:

[
  {
    "id": "003cfb6642471ba4",
    "traceId": "0025ecb5dc31498b931bce60be0784cd",
    "name": "reloadoptionsmanager",
    "timestamp": 1716477080494000,
    "kind": "SERVER",
    "remoteEndpoint": { "serviceName": "XXXXXXX" },
    "Tags": {
      "data": "LogData",
      "eventId": "0",
      "severity": "8"
    }
  }
]

The request receives a 200 response and the response body is "OK", yet the span does not appear in APM. The timestamp is the number of microseconds since 1/1/1970.
We have a contractor installing a Splunk instance for us. On the search heads, an NVMe volume is mounted at /opt/splunk/var/run. That ./run folder is owned by 'root', so the 'splunk' user cannot write into it. Similarly, our indexers have an NVMe volume mounted at /opt/splunk/var/lib, which is also owned by 'root'. The index folders and files are located one level below that, in ./lib/splunk, where the 'splunk' user is the owner. What are the consequences of having 'root' own these folders for the operation of Splunk? I assumed that when Splunk runs as a non-root user, it must own all folders and files from /opt/splunk on down. Am I wrong?
So, I have a loadjob with all the data I need, keyed by a primary field (account number). But I have a CSV with about 104K account numbers that are the only ones wanted in this report. How do I filter the loadjob down to just those 104K account numbers? I don't have admin access to change the join limit. Can lookups do the job? I also don't want the values grouped together in each row; I just want to remove from the loadjob the account numbers that are not in the CSV.
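What I was considering, if lookups can indeed do this: upload the CSV as a lookup file (assumed here to be accounts.csv with a column named account_number that matches the loadjob field; rename as needed) and use it purely as a filter, which avoids the join and subsearch limits:

| loadjob savedsearch="<user>:<app>:<saved search name>"
| lookup accounts.csv account_number OUTPUT account_number AS in_csv
| where isnotnull(in_csv)
| fields - in_csv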
Hi All, I have a URL column in my table, and clicking the URL redirects to an existing dashboard. Instead of showing the URL value, I want the cell to show only the text "Click here", and clicking it should still open that URL/dashboard. This is my table:

Name   Interface   Link
DSR1   DSR1        www.google.com
DSR2   DSR2        www.splunk.com
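What I have in mind, assuming this is a Simple XML dashboard table: display a fixed label while keeping the real URL in its own field for the drilldown, something like:

... existing search ...
| eval Link_Label="Click here"
| table Name Interface Link_Label Link

The row drilldown could then open the real URL (for example via a $row.Link$ token in a link drilldown), so the displayed text and the opened target are decoupled.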
Hi team, We have installed a new Splunk UF on one of our file servers and configured all the config files correctly, but data is not getting forwarded from the file server to Splunk. It is showing the error message below, and if I search by index name there is no data in the search head. Could you please help with this?
Hi, We are interested in installing a Geo Server as described here: Host a Geo Server (appdynamics.com). However, we cannot find the GeoServer.zip mentioned there in the downloads. Where can we find this .zip file? Thanks, Roberto
Hi Splunk Community, I need to build an alert that will be triggered if a specific signature is not present in the logs for a period of time. The message shows up in the logs every 3 or 4 seconds under BAU conditions, but there are some instances of longer intervals, going up to 4 minutes. What I had in mind was a query that runs over a 15-minute timeframe using 5-minute buckets, to ensure that I catch the negative trend and not only the one-offs. I have made it this far in the query:

index=its-em-pbus3-app "Checking the receive queue for a message of size"
| bin _time span=5m aligntime=@m
| eval day_of_week = strftime(_time,"%A")
| where NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| eval date_hour = strftime(_time, "%H")
| where (date_hour > 7 AND date_hour < 19)
| stats count by _time

(I only need the results for Monday to Friday between the hours of 7AM and 7PM.) The query returns the count by _time, which is great, but if the signature is not present I obviously don't get any hits. So I can count the number of occurrences within the 5-minute buckets, but I can't assess the intervals or determine absence using count. I thought of manipulating timestamps so I could calculate the difference between the current time and the last timestamp of the event, but I am not exactly sure how to compare a timestamp to "now". I would appreciate advice on either how to count "nulls" or how to cross-reference the timestamps of the signature against the current time. Thank you all in advance.
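Two rough sketches of the ideas above (weekday/business-hours filtering omitted for brevity). For counting "nulls": timechart, unlike stats, emits every 5-minute bucket in the search window, so a missing signature shows up as a zero count. For comparing against "now": stats without a by-clause always returns a row, so the latest event time can be checked against now():

index=its-em-pbus3-app "Checking the receive queue for a message of size" earliest=-15m
| timechart span=5m count
| where count=0

index=its-em-pbus3-app "Checking the receive queue for a message of size" earliest=-15m
| stats count AS events, latest(_time) AS last_seen
| eval minutes_since_last=if(isnull(last_seen), 999, round((now() - last_seen) / 60, 1))
| where minutes_since_last > 5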
Hi, I am having some trouble understanding the right configuration for collecting logs from an Event Hub with the "Microsoft Cloud Services" app. From the documentation (Configure Event Hubs) it is not clear how to set these three parameters for a log source that collects a lot of logs every minute:

interval --> The number of seconds to wait before the Splunk platform runs the command again. The default is 3600 seconds. Is there a way to check in the _internal logs when the command is executed? (See the search sketched below.)

max_batch_size --> The maximum number of events to retrieve in one batch. The default is 300. This is pretty clear, but can we increase this value as much as we want? I believe we run into performance issues with that.

max_wait_time --> The maximum interval in seconds that the event processor will wait before processing. The default is 300 seconds. Processing what? Waiting for what?

Does anyone know a combination of values for these three parameters that could optimize an Event Hub input handling many thousands of logs?
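Regarding the _internal question under interval above: the add-on's modular inputs write their own log files, which are indexed into _internal, so something along these lines should show when each collection run happens. The source pattern here is an assumption and may differ by add-on version:

index=_internal source=*splunk_ta_microsoft-cloudservices*
| stats count, earliest(_time) AS first_seen, latest(_time) AS last_seen BY source
| convert ctime(first_seen) ctime(last_seen)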
I am not able to search on any attribute whose name contains a dot, such as env.cookiesSize.

NOT WORKING
------------------
index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| spath "context.duration" | search "context.duration"="428.70000000006985"
| spath "context.env.cookiesSize" | search "context.env.cookiesSize"=7670

WORKING
------------------
index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| spath "context.duration" | search "context.duration"="428.70000000006985"

Please let me know the solution for this. The event context looks like:

context: {
  duration: 428.70000000006985
  env.automation-bot: false
  env.cookiesSize: 7670
  env.laneColor: blue
}
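Two variants I have been trying to put together (not sure they are correct) for field names that contain dots: rename the field to a dot-free name before filtering, or use where, since eval/where need single quotes around dotted field names:

index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| rename "context.env.cookiesSize" AS cookiesSize
| search cookiesSize=7670

index="ss-prd-dkp" "*price?sailingId=IC20240810&currencyIso=USD&categoryId=pt_internet"
| spath status | search status=500
| where 'context.env.cookiesSize'=7670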
We are receiving some notables that reference an encoded command being used with PowerShell, and the notable lists the command in question. The issue is that the listed command appears to be incomplete (truncated) when we decode the string. Does anyone know a way we could hunt down the full encoded command referenced in the notable?
Hello Splunkers, I am trying to set up an export scenario to Rapid7 in which all Active Directory data is transferred to that service. With the official guide from Splunk I can export the data, but it is not formatted as JSON. Instead, every line is sent on its own, so each attribute ends up as its own entry, which doesn't help because I can't search a log that has been split into different pieces. Does anyone have experience with this transfer process?
index=abc sourcetype=abc | timechart span=1m eval(count(IP)) AS TimeTaken

Now I want to get the 95th percentile of these per-minute IP counts, something like:

| stats perc95(TimeTaken) as Perc_95 by IP

So how should I write this query?
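A sketch of how I think the two steps could be combined, assuming the goal is the 95th percentile of each IP's per-minute event count (index, sourcetype and field names taken from the question):

index=abc sourcetype=abc
| bin _time span=1m
| stats count AS TimeTaken BY _time, IP
| stats perc95(TimeTaken) AS Perc_95 BY IP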
Hi, I have a JSON file in Splunk with an arguments{} field like this:

field1=[content_field1] field2=[content_field2] field3=[content_field3]

Splunk doesn't recognize the fields field1 etc. I assume it is because this is not really JSON format, but I want to be sure. I can extract the fields with rex, but it would be better if Splunk could recognize the fields automatically. I think the content of the log file should be something like this:

arguments{}:{"field1":"content_field1", "field2":"content_field2", "field3":"content_field3"}

but I want to be sure that's the best way (because if it is, the logging has to be changed). Does Splunk recognize the fields automatically if events are logged in this way? Is the above the best way, or are there better ways to let Splunk recognize the fields automatically?
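For reference, a minimal sketch of the search-time setting that, as far as I know, makes Splunk parse nested JSON automatically, assuming the events are valid JSON end to end (the sourcetype name is a placeholder):

# props.conf on the search head; sourcetype name is an assumption
[my_json_sourcetype]
KV_MODE = json

With that in place, a value logged as "arguments": {"field1": "content_field1"} would typically surface as a field named arguments.field1 at search time.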
Hi Team, Could you please help me with installing the pandas module for Phantom? Regards, Harisha
We are trying to configure Octopus Deploy so that data is sent via HEC, and now I need to validate new logging locations in Splunk to send logs to. Which logging locations should be considered?
Hello Splunkers!! I want to ingest the two patterns of events below into Splunk. Both are JSON logs, but their timestamps are different. So far I have used the attributes below in my props.conf. Please let me know or suggest if there is any other attribute I need to add so that both patterns of events parse smoothly without any time difference.

[exp_json]
AUTO_KV_JSON = false
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = \"time\"\:\"
category = Custom
pulldown_type = true

Pattern 1:
{"datacontenttype":"application/json","data":{"identificationStatus":"NO_IDENTIFICATION_ATTEMPTED","location":"urn:topology:segment:1103.20.15-1103.20.19","carrierId":null,"trackingId":"dc268ac7-168a-11ef-b02a-1feae60bb414"},"subject":"CarrierPositionUpdate","messages":[],"specversion":"1.0","classofpayload":"com.vanderlande.conveyor.boundary.event.business.outbound.CarrierPositionUpdate","id":"8252fb03-2eb2-4619-a59b-24e3280f9bda","source":"conveyor","time":"2024-05-20T09:29:53.361800Z","type":"CarrierPositionUpdate"}

Pattern 2:
{"data":{"physicalId":"60040160041570014272","carrierTypeId":"18","carrierId":"60040160041570014272","prioritizedDestinations":[{"name":"urn:topology:location:Pallet Loop (DEP):OBD/Returnflow:Exit01","priority":1},{"name":"urn:topology:location:Pallet Loop (DEP):OBD/Returnflow:Exit02","priority":1}],"transportOrderId":"TO_00001399"},"topic":"transport-order-commands-conveyor","specversion":"1.0","time":"2024-05-22T18:02:16.669Z","id":"34A0DF56-B0B2-4A73-9D7B-034A94D49747","type":"AssignTransportOrder"}

Thanks in advance!!
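Not sure if this is the right fix, but a sketch of two settings I am considering adding on top of the stanza above: a slightly looser timestamp prefix and a bounded lookahead, relying on Splunk's automatic ISO-8601 recognition to handle both the millisecond and microsecond variants:

[exp_json]
TIME_PREFIX = \"time\"\s*:\s*\"
MAX_TIMESTAMP_LOOKAHEAD = 40

Since INDEXED_EXTRACTIONS = json is already in use, TIMESTAMP_FIELDS = time is another route worth testing instead of a regex prefix.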
With some of the events, we are seeing an unexpected format in the query results. In the raw event there is no issue at all, and each field shows its own value. But when the data is queried and displayed in the statistics section, the values of a few fields display incorrectly. Normally the search results show key=value pairs, but for some events they show "fieldname1=fieldname1=value", and in some cases "fieldname1=fieldname3=value".

Example 1: Request_id=Request_id=12345 (expected: "Request_id=12345")
Example 2: Parent_id=message_id=456 (expected: "Parent_id=321")
Example 3: Parent_id=category=unknown (expected: "Parent_id=321")

Is this related to the parser or something else? We are unable to find where the issue lies. Could anyone please help us fix this as soon as possible?
I am trying to install Splunk via GPO. Previously, I installed it locally on the machines with a batch file that passed additional installation parameters. Now I use the same batch file through a GPO and I get system error 1376, "The specified local group does not exist". The same user works when I install locally; in that case I specify it as domain\username. The user is used to run the Splunk service.