All Topics



What happens if we place deploymentclient.conf on the forwarder in both system/default and an app's local directory? For example:

1. Under system/default, deploymentclient.conf:

[deployment-client]
clientName = xyz
phoneHomeIntervalInSecs = 5

[target-broker:deploymentServer]
targetUri = xxx

2. Under app/local, deploymentclient.conf:

[deployment-client]
phoneHomeIntervalInSecs = 3600

Will the phone home take place every 3600 seconds because app/local has the higher precedence?
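One way to check which value actually wins is btool, which prints the merged configuration and, with --debug, the file each setting came from. A minimal sketch, assuming a Universal Forwarder installed at /opt/splunkforwarder:

```
# Show the merged deploymentclient.conf and the source file of each setting
/opt/splunkforwarder/bin/splunk btool deploymentclient list --debug
```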
Hi all, I've got two queries I'm trying to combine to track authorizations that are completed, or that expire after a period of seven days.

The first query gets all of the authorizations sent, filtered to a unique AccountNum.

Query 1:

earliest=-8d@d latest=-7d@d sourcetype="PCF:log" cf_app_name=app1 "Sending authorization"
| rex field=msg "BAN: (?<AccountNum>\w+)"
| dedup AccountNum

The second query returns all authorizations that have expired after a period of inactivity.

Query 2:

earliest=@d latest=now sourcetype="PCF:log" cf_app_name=app2 "authorizationExpired"
| rex field=msg ",ban:(?<AccountNum>\w+)"
| dedup AccountNum

The closest I've come to combining them the way I need is:

Query 3:

earliest=-8d@d latest=-7d@d sourcetype="PCF:log" cf_app_name=app1 "Sending authorization"
| rex field=msg "BAN: (?<AccountNum>\w+)"
| dedup AccountNum
| append [search earliest=@d latest=now sourcetype="PCF:log" cf_app_name=app2 "authorizationExpired"
    | rex field=msg ",ban:(?<AccountNum>\w+)"
    | dedup AccountNum ]
| fields msg
| eval action=case( match(msg,"Sending authorization+"), "Total Authorizations Sent", match(msg,"authorizationExpired+"), "Authorizations Expired")
| stats count(msg) by action

However, there are two mistakes/gaps with this third query. First, I need the second query to return results only where an AccountNum in query 2 matches an AccountNum in query 1. Second, I'd like a pie chart of Authorizations Expired (query 2) vs. Authorizations Complete (total - expired = complete), but I'm struggling with the syntax to achieve that; this third query shows total plus expired, where expired is actually a subset of total. A third thing: I don't know that append is really what I need, or whether there's a better, more performant way to construct this query. I'd love to learn any helpful tips or tricks! I greatly appreciate any help.
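One way to restrict query 2 to the AccountNums produced by query 1 is to feed query 1 in as a subsearch filter, then compute both counts. This is a sketch under the assumption that query 1 returns well under the subsearch result limit (10,000 rows by default); field and app names are taken from the question:

```
earliest=@d latest=now sourcetype="PCF:log" cf_app_name=app2 "authorizationExpired"
| rex field=msg ",ban:(?<AccountNum>\w+)"
| search
    [ search earliest=-8d@d latest=-7d@d sourcetype="PCF:log" cf_app_name=app1 "Sending authorization"
      | rex field=msg "BAN: (?<AccountNum>\w+)"
      | dedup AccountNum
      | fields AccountNum ]
| dedup AccountNum
| stats count AS Expired
| appendcols
    [ search earliest=-8d@d latest=-7d@d sourcetype="PCF:log" cf_app_name=app1 "Sending authorization"
      | rex field=msg "BAN: (?<AccountNum>\w+)"
      | dedup AccountNum
      | stats count AS Total ]
| eval Complete = Total - Expired
```

For a pie chart, the single row of Expired/Complete values can be turned into category rows with "| fields Expired Complete | transpose".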
I am new to Splunk. I need to find the difference between two scan results from two different dates. Someone suggested using a combination of the set and diff commands. Can anyone give me a lead on how to accomplish this? Thank you.
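For reference, SPL's set command can compute the difference between two subsearches directly; it must be the first command in the search. A hedged sketch in which the index name, field names, and day-based time ranges are placeholders for your scan data:

```
| set diff
    [ search index=scan_results earliest=-1d@d latest=@d | fields host signature ]
    [ search index=scan_results earliest=-2d@d latest=-1d@d | fields host signature ]
```

This returns rows present in one day's scan but not the other; restricting to a few fields first makes the comparison meaningful.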
Hi, I am a developer looking to integrate Prometheus with Splunk. Prometheus is up and running locally (http://localhost:9090). The objective is to post metrics data (Hyperledger Fabric metrics) from Prometheus directly to Splunk for monitoring, instead of using Grafana. For Grafana I just need to provide the data source, i.e. http://localhost:9090, but I am unable to do the same for Splunk.

Steps I followed:
1. Created an HTTP Event Collector (HEC) token on Splunk.
2. Fetched the token.
3. Passed the token and URL in prometheus.yaml (screenshot: Prometheus_File_Changes).
4. When I start the Prometheus container to establish the connection with Splunk, it gives me the error below (screenshot: Error).

Kindly guide me on how to post my metrics / Prometheus URL to Splunk so that I can monitor my metrics, and let me know if I missed any step.

#Prometheus #metrics #HyperledgerFabricV2.0
For some reason there are invisible characters being extracted from the Windows event message, and I can't seem to remove them to use the value as a time. The date gets extracted as shown below and prevents me from using it as a dateTime. How do you strip those out?

The previous system shutdown at 10:48:10 AM on ‎6/‎11/‎2020 was unexpected.

| rex field=Message "(?i)at\s(?P<shutdown_time>[^\s].+)\son\s(?P<shutdown_date>[^\s]+)"
| eval shutdownAt=shutdown_date+" "+shutdown_time
| eval shutdownepoch=strptime(shutdownAt,"%e/%d/%Y %I:%M:%S %p")

This is unable to assign shutdownepoch.
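The invisible characters are most likely U+200E left-to-right marks, which Windows embeds around localized date components. A hedged sketch that strips everything outside printable ASCII before parsing; note the %m/%d/%Y format here is an assumption that the date is month-first (the original %e/%d/%Y would try to parse the day twice):

```
| rex field=Message "(?i)at\s(?P<shutdown_time>[^\s].+)\son\s(?P<shutdown_date>[^\s]+)"
| eval shutdown_date=replace(shutdown_date, "[^\x20-\x7E]", "")
| eval shutdownAt=shutdown_date." ".shutdown_time
| eval shutdownepoch=strptime(shutdownAt, "%m/%d/%Y %I:%M:%S %p")
```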
I am currently planning to forward logs from multiple security devices to Splunk Enterprise. I would like to know if it's possible to extract fields from different log sources into a unified field format. For example, I forward logs from firewall, proxy, and endpoint security products. Since different vendors follow different naming conventions in their logs, is it possible to standardize this when parsing and indexing? From all these device logs, I want certain important fields.
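Normalization like this is usually done at search time with field aliases, which is the approach the Splunk Common Information Model takes, rather than at parse/index time. A hedged sketch with hypothetical sourcetypes and vendor field names:

```
# props.conf (search head, or inside each vendor's add-on)
[vendor_a:firewall]
FIELDALIAS-src = source_address AS src_ip
FIELDALIAS-dst = destination_address AS dest_ip

[vendor_b:proxy]
FIELDALIAS-src = c_ip AS src_ip
FIELDALIAS-dst = r_ip AS dest_ip
```

Searches can then use src_ip and dest_ip uniformly across all three products.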
Hello guys, what are .rbsentinel files on clustered indexers? Could they conflict when thawing buckets if they have the same bucket ID? Thanks.
Hello, I have the following logs from cron:

File successfully sent - AllOpenItemsPT_YYYYMMDD_HR-MM.csv.zip @08:00
File successfully sent - AllOpenItemsPT_YYYYMMDD_HR-MM.csv.zip @10:15
File successfully sent - AllOpenItemsPT_YYYYMMDD_HR-MM.csv.zip @11:00
File successfully sent - AllOpenItemsMaint_YYYYMMDD_HR-MM.csv.zip @07:00
File successfully sent - AllOpenItemsMaint_YYYYMMDD_HR-MM.csv.zip @09:00
File successfully sent - AllOpenItemsMaint_YYYYMMDD_HR-MM.csv.zip @13:00
File successfully sent - AllOpenItemsCOUNTRYNAME_YYYYMMDD_HR-MM.csv.zip @12:00
File successfully sent - AllOpenItemsCOUNTRYNAME_YYYYMMDD_HR-MM.csv.zip @14:30
File successfully sent - AllOpenItemsCOUNTRYNAME_YYYYMMDD_HR-MM.csv.zip @17:20

I am trying to group the files based on the "AllOpenItems" string for the last 24 hours and tried the below:

index=* namespace=* "File successfully sent -"
| rex "File successfully sent - AllOpenItems(?<reptype>\w+)"
| stats values(reptype) as ReportType by reptype

The problem with the above is that I am unable to strip the date and time from the file name, so it won't group as per my requirement. Could someone assist, please?
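One way to drop the date/time portion is to capture only the letters that follow "AllOpenItems", stopping at the underscore before the date (the \w+ in the original also matches the digits and underscores of the timestamp). A sketch assuming the file names always follow the AllOpenItems<Type>_YYYYMMDD_HR-MM.csv.zip pattern shown above:

```
index=* namespace=* "File successfully sent -" earliest=-24h
| rex "File successfully sent - AllOpenItems(?<reptype>[A-Za-z]+)_"
| stats count BY reptype
```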
Hi, I have two add-ons installed on the same heavy forwarder: Splunk_TA_microsoft-cloudservices v4.0.1 and TA-ms-loganalytics v1.0.3. They conflict when both are enabled at the same time. With the cloud services add-on installed and running, installing the log analytics add-on breaks both: log analytics doesn't work and cloud services also stops. I get the errors below when both add-ons are enabled at the same time. Kindly help.

On the log analytics add-on:

Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: REST Error [500]: Internal Server Error
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/splunktaucclib/rest_handler/handler.py", line 113, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/splunktaucclib/rest_handler/handler.py", line 348, in _format_all_response
    self._encrypt_raw_credentials(cont['entry'])
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/splunktaucclib/rest_handler/handler.py", line 382, in _encrypt_raw_credentials
    change_list = rest_credentials.decrypt_all(data)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/splunktaucclib/rest_handler/credentials.py", line 286, in decrypt_all
    all_passwords = credential_manager._get_all_passwords()
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/solnlib/utils.py", line 154, in wrapper
    return func(*args, **kwargs)
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/solnlib/credentials.py", line 272, in _get_all_passwords
    clear_password += field_clear[index]
TypeError: cannot concatenate 'str' and 'NoneType' objects
message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py"
  File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/splunktaucclib/global_config/configuration.py", line 270, in load
    ucc_inputs = global_config.inputs.load(input_type=self.input_type)

On the cloud services add-on:

Unexpected error "<class 'splunktaucclib.rest_handler.error.RestError'>" from python handler: REST Error [500]: Internal Server Error
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/handler.py", line 116, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/handler.py", line 178, in all
    **query
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/splunklib/binding.py", line 289, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/splunklib/binding.py", line 679, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/splunklib/binding.py", line 1183, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/splunklib/binding.py", line 1241, in request
    response = self.handler(url, message, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/splunk_rest_client.py", line 145, in request
    verify=verify, proxies=proxies, cert=cert, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/requests/api.py", line 60, in request
    return session.request(method=method, url=url, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/solnlib/packages/requests/adapters.py", line 498, in send
    raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', BadStatusLine("''",))
See splunkd.log for more details.

Stack trace from python handler:
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 93, in init_persistent
    hand.execute(info)
  File "/opt/splunk/lib/python2.7/site-packages/splunk/admin.py", line 594, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunk_ta_mscs_rh_mscs_storage_blob.py", line 114, in handleList
    AdminExternalHandler.handleList(self, confInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/admin_external.py", line 51, in wrapper
    for entity in result:
  File "/opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunktamscs/splunktaucclib/rest_handler/handler.py", line 123, in wrapper
    raise RestError(500, traceback.format_exc())
RestError: REST Error [500]: Internal Server Error (the inner traceback repeats the frames above, ending in ConnectionError: ('Connection aborted.', BadStatusLine("''",)))
Hi all, I need your help in setting a value for a particular keyword by matching two parameters with an eval statement. Below is my query.

index=itsm ~truncated~
| eval CRITICAL=if(Impact="1-Extensive" AND Urgency="1-Critical",1,0)
| eval CRITICAL=if(Impact="2-Significant" AND Urgency="1-Critical",1,0)
| eval CRITICAL=if(Impact="1-Extensive" AND Urgency="2-High",1,0)
| eval HIGH=if(Impact="2-Significant" AND Urgency="3-Medium",1,0)
| eval HIGH=if(Impact="2-Significant" AND Urgency="4-Low",1,0)
| eval HIGH=if(Impact="2-Significant" AND Urgency="2-High",1,0)
| eval HIGH=if(Impact="1-Extensive" AND Urgency="3-Medium",1,0)
| eval HIGH=if(Impact="1-Extensive" AND Urgency="4-Low",1,0)
| eval MEDIUM=if(Impact="3-Moderate" AND Urgency="1-Critical",1,0)
| eval MEDIUM=if(Impact="3-Moderate" AND Urgency="2-High",1,0)
| eval MEDIUM=if(Impact="3-Moderate" AND Urgency="3-Medium",1,0)
| eval MEDIUM=if(Impact="4-Minor" AND Urgency="1-Critical",1,0)
| eval MEDIUM=if(Impact="4-Minor" AND Urgency="2-High",1,0)
| eval LOW=if(Impact="3-Moderate" AND Urgency="4-Low",1,0)
| eval LOW=if(Impact="4-Minor" AND Urgency="4-Low",1,0)
| eval LOW=if(Impact="4-Minor" AND Urgency="3-Medium",1,0)
| table Incident_Number, Impact, Urgency, CRITICAL, HIGH, MEDIUM, LOW

I get Incident_Number, Impact, and Urgency from the index. I tried the combination above, but I am not getting the exact values. CRITICAL, HIGH, MEDIUM, and LOW are combinations of Impact and Urgency. Below is the table I am looking for; please help me with this.

Incident_Number   Impact          Urgency   CRITICAL  HIGH  MEDIUM  LOW
INC000013677484   4-Minor         4-Low     0         0     0       1
INC000013677686   2-Significant   2-High    0         1     0       0
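One pitfall with the query above is that each repeated eval of the same field (e.g. the three eval CRITICAL=... lines) overwrites the previous value, so only the last if() per field survives. A single ordered case() avoids that. This sketch is built from the Impact/Urgency combinations listed above; the IN() eval function assumes Splunk 7.3 or later:

```
| eval priority=case(
    Impact="1-Extensive" AND Urgency IN ("1-Critical", "2-High"), "CRITICAL",
    Impact="2-Significant" AND Urgency="1-Critical", "CRITICAL",
    Impact IN ("1-Extensive", "2-Significant"), "HIGH",
    Impact="3-Moderate" AND Urgency="4-Low", "LOW",
    Impact="4-Minor" AND Urgency IN ("3-Medium", "4-Low"), "LOW",
    true(), "MEDIUM")
| eval CRITICAL=if(priority="CRITICAL",1,0), HIGH=if(priority="HIGH",1,0), MEDIUM=if(priority="MEDIUM",1,0), LOW=if(priority="LOW",1,0)
| table Incident_Number, Impact, Urgency, CRITICAL, HIGH, MEDIUM, LOW
```

case() evaluates its conditions in order and stops at the first match, which is what makes the fall-through "HIGH" and "MEDIUM" branches safe here.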
Hello, please, I'd like to know if, and how, it is possible to deploy one app to a single specific member of a search head cluster. For example, in a scenario where I have:

- SH1, SH2 and SH3
- App1, App2, App3, App4, App5

I would like to distribute:

- App1, App2, App3, App4 to all SH members (SH1, SH2, SH3)
- App5 to SH3 only

Am I right that I should use the deployer to distribute App1, App2, App3, App4 and put App5 manually on SH3? Would it work, or would App5 be synced to SH1 and SH2 even though it was not deployed via the deployer? Consider that App5 may contain searches and alerts (and, of course, related artifacts). Thanks and best regards, Luca.
Hello, I am trying to export a large log with the CLI search below. It works well with a smaller log return, but gives an error on large logs: FATAL: The search job terminated unexpectedly. For instance, this search on pan_logs terminated:

/opt/splunk/bin/splunk search "index=pan_logs earliest=-7d" -preview 0 -maxout 0 -output rawdata | gzip > pan_logs_7days.gz

Does anyone know how to resolve this issue? Thanks.
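When the CLI search keeps dying on large result sets, one alternative worth trying is the REST export endpoint, which streams results out as they are produced instead of holding the whole job. A hedged sketch; the credentials and management host/port (8089) are placeholders for your environment:

```
curl -k -u admin:yourpassword \
  https://localhost:8089/services/search/jobs/export \
  -d search="search index=pan_logs earliest=-7d" \
  -d output_mode=raw | gzip > pan_logs_7days.gz
```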
Hi team, I want to know what metric to use for minimum node availability. My requirement: my application is hosted in the cloud, and I need a minimum of 3 of its 5 servers to be up and running at all times. Please help me. Thanks, Datta
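One hedged way to express "at least 3 of 5 nodes up" in SPL is to count the distinct hosts that have reported recently and flag when the count drops below 3; the index name and the 5-minute window are assumptions to adapt to your data:

```
index=app_metrics earliest=-5m
| stats dc(host) AS nodes_up
| eval status=if(nodes_up >= 3, "OK", "DEGRADED")
```

Saved as an alert with a custom trigger condition such as "search nodes_up < 3", this fires whenever fewer than 3 servers are reporting.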
Hi, we are using Splunk Cloud from the Azure marketplace. I have created an HEC token, but I am having trouble sending data to Splunk Cloud. I have tested some different ports, but it doesn't work. The same approach on my Splunk Cloud trial instance works.

Working (my test instance):

curl -k https://prd-p-<label>.splunkcloud.com:8088/services/collector/event/1.0 -H "Authorization: Splunk <token>" -d '{"event": "hello world"}'

Not working (my company's Azure Splunk Cloud instance):

curl -k https://<company>.splunkcloud.com:8088/services/collector/event/1.0 -H 'Authorization: Splunk <token>' -d '{"event": "hello world"}'
curl -k https://<company>.splunkcloud.com/services/collector/event/1.0 -H 'Authorization: Splunk <token>' -d '{"event": "hello world"}'

Does anybody know how to send data via HEC to Splunk Cloud hosted as an Azure service? Thanks a lot.
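For what it's worth, on managed (non-trial) Splunk Cloud stacks, HEC is typically exposed at an http-inputs hostname on port 443 rather than on the stack's own URL with port 8088; whether this applies to Azure-hosted stacks is worth confirming with Splunk support. A hedged variant to try, keeping the placeholders from above:

```
curl -k "https://http-inputs-<company>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "hello world"}'
```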
Hi, I am having an issue. We have 3 search heads in a cluster, currently behind a load balancer. Sometimes the Web GUI is too slow: it doesn't load, the search bar doesn't come up, and it hangs so we can only see a loading screen. I am unable to find out what the issue is; I am not sure whether it is the load balancer or a networking problem. If anyone can guide me on this, it would be helpful.
Hi. I have a disk space issue on an indexer: 92% utilization of the /opt/splunkdata directory. The files consuming the most space there are index database files, such as "_internal_db", plus some temp folders that also contain dbs. I'm not sure which of them it is safe to clear; almost all files in the directory are dbs. Could you please suggest what kind of data can be deleted to free some space without losing important data? Thanks in advance.
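Rather than deleting bucket files by hand (which can corrupt an index), the safer lever is retention settings in indexes.conf, which let Splunk itself age out the oldest buckets. A hedged sketch lowering retention for the _internal index; the values are illustrative examples, not recommendations:

```
# indexes.conf
[_internal]
frozenTimePeriodInSecs = 1209600   # roughly 14 days; _internal defaults to about 28 days
maxTotalDataSizeMB = 20000         # cap on the total size of the index
```

After a restart, buckets older than the retention period (or beyond the size cap) are frozen, which by default means deleted.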
I am new to Splunk and have a question about the Asset and Identity data model. We are on ES 5.3.0. I am trying to load data into the Asset and Identity model and need to add some custom fields in addition to the default fields. I tried adding them to the main asset fields, and also to the calculated fields, but when I run |`assets`, it does not show the custom fields I added. Any ideas? Also, I could not find anything in the ES 5.3.0 documentation on how to add custom fields; I only see it in the ES 6.1.1 documentation.
Hi, I am looking to build a dashboard where I can track all emails sent for configured alerts. I have used the Alert Manager app, where I can get these details except for a couple of fields: I am also looking to get the recipient list and the subject (optional), which I guess are not available there. I am aware of the internal logs (sourcetype=splunk_python) with details of all emails sent by Splunk, but I couldn't find a way to map those details to the details in Alert Manager.

splunk_python log:

2020-06-10 17:15:14,882 +0200 INFO sendemail:139 - Sending email. subject="[PS_PI45_KERNEL_ELEMENTS] Splunk CPU Alert", results_link="http://splunk:8000/apps/xxxxx/@go?sid=scheduler__admin__xxxxx__RMD56802a2f6671046cd_at_1591802100_15918", recipients="[u'xyz@xxxxx.com', u'abc@xxxxx.com']", server="smtp.murex.com"

Thanks
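The subject and recipients can be pulled out of the splunk_python events with rex, and the sid embedded in results_link may serve as a join key back to Alert Manager incidents (which also record the search sid). A hedged sketch based on the log line format shown above:

```
index=_internal sourcetype=splunk_python "Sending email"
| rex "subject=\"(?<subject>[^\"]+)\""
| rex "recipients=\"(?<recipients>[^\"]+)\""
| rex "sid=(?<sid>[^&\"]+)"
| table _time subject recipients sid
```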
Hi, I'm using .NET (Splunk.Client) to search Splunk data (firewall logs). The code is similar to this:

using (SearchResultStream stream = await service.SearchOneShotAsync(search))
{
    foreach (SearchResult searchResult in stream)
    {
        string src_ip = searchResult.GetValue(searchResult.FieldNames[0]);
        string dst_ip = searchResult.GetValue(searchResult.FieldNames[1]);
        string port = searchResult.GetValue(searchResult.FieldNames[2]);
        string protocol = searchResult.GetValue(searchResult.FieldNames[3]);
        result += src_ip + "," + dst_ip + "," + port + "," + protocol + "\r\n";
    }
    Console.WriteLine("\r\n");
}

The query takes the expected time to complete (around 75 minutes), but when parsing the result it throws an exception: "Read character data where <results> or <response> was expected". I'm using the same query string on different search indexes; only one of them fails, the others work just fine. Any idea what could be causing this? What's the meaning of this error? I can't find it documented anywhere.
Hi all, I'm trying to install on my RKE cluster and have read the documentation. There seem to be no instructions on how to deploy the operator first: after creating a namespace, it jumps straight to "deploy cluster agent operator yml" without any predefined YAML. I tried the GitHub README as well. Do I need to download the entire deploy directory from GitHub, or just the operator YAML?