I want to extract only the JSON data into key-value pairs, and the JSON is not fixed; it can extend across extra lines. Everything needs to be done at the indexer level and nothing on the search head.   Sample:   2024-03-11T20:58:12.605Z [INFO] SessionManager sgrp:System_default swn:99999 sreq:1234567 | {"abrMode":"NA","abrProto":"HLS","event":"Create","sUrlMap":"","sc":{"Host":"x.x.x.x","OriginMedia":"HLS","URL":"/x.x.x.x/vod/Test-XXXX/XXXXX.smil/transmux/XXXXX"},"sm":{"ActiveReqs":0,"ActiveSecs":0,"AliveSecs":360,"MediaSecs":0,"SpanReqs":0,"SpanSecs":0},"swnId":"XXXXXXXX","wflow":"System_default"} 2024-03-11T20:58:12.611Z [INFO] SessionManager sgrp:System_default swn:99999 sreq:1234567 | {"abrMode":"NA","abrProto":"HLS","event":"Cache","sUrlMap":"","sc":{"Host":"x.x.x.x","OriginMedia":"HLS","URL":"/x.x.x.x/vod/Test-XXXXXX/XXXXXX.smil/transmux/XXX"},"sm":{"ActiveReqs":0,"ActiveSecs":0,"AliveSecs":0,"MediaSecs":0,"SpanReqs":0,"SpanSecs":0},"swnId":"XXXXXXXXXXXXX","wflow":"System_default"}
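One index-time approach worth sketching (the sourcetype name below is an assumption, not from the post) is to strip the non-JSON prefix with SEDCMD in props.conf on the indexers, so that only the JSON payload is written to the index; timestamp extraction happens earlier in the parsing pipeline, so _time should still come from the leading timestamp. Making the key-value pairs fully indexed fields would need additional index-time transforms, which are not shown here.

[session_manager_log]
SHOULD_LINEMERGE = false
# each record starts with an ISO8601 timestamp
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}
# index-time: drop everything up to the first "{" so only the JSON object remains
SEDCMD-strip_prefix = s/^[^{]+//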
ACCU_DILAMZ9884 Failed, cueType=Splicer, SpliceEventID=0x00000BBC, SessionID=0x1A4D3100 SV event=454708529 spot=VAF00376_i pos=1 dur=0 Result=110 No Insertion Channel Found

I want to extract the words that come after Result=XXX, and not include the Result=XXX itself in the output.

|rex field=Message "(?<Test>\bResult.*\D+)"

This produces the output: Result=110 No Insertion Channel Found. So I want to exclude the Result=XXX.
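A hedged alternative: anchor the capture after the Result=<number> token so the match starts with the text you want to keep (the field name Message is taken from the post; Test is just an example capture name):

| rex field=Message "Result=\d+\s+(?<Test>.+)"

With the sample event, Test would then contain "No Insertion Channel Found" without the Result=110 part.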
Hi community,    I am trying to connect to the DB Connect app and I am constantly redirected to http://$HOST/en-US/app/splunk_app_db_connect/ftr. What is the FTR, and how can I get rid of this error or force a redirect to a DB Connect page that works? I tried deleting the app folder in the $SPLUNK_HOME/etc/apps directory and reinstalling, but I am still getting the same error. Any assistance here will be greatly appreciated.
Suppose I have `/var/log/nginx/access.log` and then a dozen files in the same directory named like `access.log-<date>.gz`. When Splunk processes the gzip'd files, is it supposed to index them under the `/var/log/nginx/access.log` source? I ask because I've noticed that these gzip files show up when I query: ``` source="/var/log/nginx/access.log*" | stats count by source ```   I'd appreciate a link to docs regarding this, I couldn't find any. Thanks!
We use the Qualys Technical Add-On to pull vulnerability data into Splunk. We run it on our Inputs Data Manager (IDM). We'd like to include additional fields in our data pulls, but in order to do that we need to go to the setup page. When going to the setup page on the IDM, it never loads and we see this data in web_service.log: 2024-09-03 21:26:32,726 INFO  __init__:654 - Authorization Failed: b'{"messages":[{"type":"ERROR","text":"You (user=myusername) do not have permission to perform this operation (requires capability: edit_telemetry_settings)."}]}' From what I've been told, edit_telemetry_settings can only be assigned to admins, not sc_admins, so no one has access to the setup page. Qualys is telling me that they have other users with IDMs that are using the Qualys TA fine, but our issue has persisted across restarts, multiple environments, and multiple TA versions. Can anyone confirm they can load the setup page for the Qualys TA from an IDM?
I recently issued a "splunk set default-hostname <hostname>" on a new node I added to our search cluster. It ended up replicating etc/system/local/inputs.conf to all other members, so obviously, all search members began logging their events with the same 'host' field. So, if I want to avoid this in the future,  how do I leverage conf_replication_summary.excludelist to blacklist the file from replication? I'm thinking that it'd be something like this, but I really don't know as I've never used this flag before. [shclustering] conf_replication_summary.excludelist.inputs = etc[/\\]system[/\\]local[/\\]inputs\.conf Thank you.
I need to run Splunk Stream on some universal forwarders to capture data from a set of servers. The only way I've been able to do this is by running splunkd as root, which is not viable in production. I am deploying Splunk_TA_stream 8.1.3 to the forwarders using a deployment server; forwarders are configured for boot-start. I've followed the documentation on installing the add-on and running set_permissions.sh to change the binary to run as root. However, restarting splunk reverts the permissions on the streamfwd binary and streaming fails to start, throwing the errors below.  If I modify the service to run as root stream works as expected. (CaptureServer.cpp:2338) stream.CaptureServer - SnifferReactor was unable to start packet capturesniffer (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor unrecognized link layer for device <ens192>: 253 The servers I need to stream from are all running Red Hat 9.4 on VMWare 8 using VMXNET 3 NICs. I'm aware of workarounds others have come up with, but we need a permanent solution to this problem. streamfwd app error in /var/log/splunk/streamfwd.l... - Splunk Community
A little background: our organization set up hundreds of service templates when we rolled out ITSI. We're trying to clean up unwanted KPIs in these services, and I have one KPI that I want off of all the service templates. The manual process of navigating:
1) Configuration
2) Service Monitoring
3) Service Templates
4) Search for a service
5) Edit
6) Click the X on the unwanted KPI
7) Save the template
8) Propagate the change
is taking forever to do in bulk. Is there a faster way?
Hello Community, We are being hit with a massive spam attack that is still ongoing. For the time being, I have had to make the difficult decision to lock down the community from any new posts or replies being made until further steps can be taken to stop the bot attack. I apologize if you received numerous unwanted community emails as a result. Although our filters managed to catch most of the spam, some still made it through. I am currently reviewing our filtering systems to identify improvements and ensure that such incidents are minimized in the future. I'm doing my best to review all the flagged spam posts to try and catch any false reports. If you notice that your post was caught by the spam filter, please send me a Private Message and I can find it and restore it. Thank you for your understanding and patience. We appreciate your continued support and apologize for any inconvenience this may have caused. Best, Ryan, Cisco AppDynamics Community Manager
Introduction This blog post is part of an ongoing series on SOCK enablement. In this blog post, I will write about parsing messages to extract valuable information, and then process it consistently across entries. SOCK (Splunk OpenTelemetry Collector for Kubernetes) can be used to process many different kinds of data, one of the most common ones being logs extracted from log files. We use operators to extract and parse information - operators being the most basic units of log processing. As an example, by default, the filelog receiver in the SOCK pipeline uses various operators to extract information from the incoming logs and log file path. This information includes, but is not limited to: namespace, pod, uid, and container name from the log file’s path time, log level, log-tag and log message from the actual log body In later stages of the pipeline, this information is used to enrich the attributes of the log. For example: com.splunk.sourcetype field is set from the container name com.splunk.source field is set from the log file’s path So, if the full path of the container’s log file is: /var/log/pods/kube-system_etcd/etcd/0.log, then com.splunk.source value will be set to this value - we understand the path of the file as its source There might be scenarios where you would like to set a different source other than the default one (i.e. log’s file path) or there is a need to extract some extra attributes from the log message. This article explains how to do it. Operators The OpenTelemetry Collector comes with a set of operators. From README:     An operator is the most basic unit of log processing. Each operator fulfills a single responsibility, such as reading lines from a file, or parsing JSON from a field. Operators are then chained together in a pipeline to achieve a desired result. For instance, a user may read lines from a file using the file_input operator. From there, the results of this operation may be sent to a regex_parser operator that creates fields based on a regex pattern. And then finally, these results may be sent to a file_output operator that writes each line to a file on disk.       Under the hood, SOCK uses a pipeline of several operators to extract the information from the log. We will look at an example of logs produced by containerd - it is one of the runtimes commonly used to run containers (a different runtime could be one like docker). Let’s look at a snippet of an operator from SOCK used to extract data from containerd runtime logs:     - type: regex_parser id: parser-containerd regex: '^(?P<time>[^ ]+) (?P<stream>stdout|stderr) (?P<logtag>[^ ]*) ?(?P<log>.*)$' timestamp: parse_from: attributes.time layout_type: gotime layout: '2006-01-02T15:04:05.999999999Z07:00'   The actual log being read from the file and going into the operator might look like this:   2023-12-27T12:14:05.227298808+00:00 stderr F Hello World   The above operator does a simple thing. It extracts the following data from log messages based on the regular expression:  time: it is set to “2023-12-27T12:14:05.227298808+00:00” stream: “stderr” matches with one of the two possible stream types (stdout or stderr) logtag: is set to “F”  log: “Hello World” - this is an actual log message  Our regex operator extracts these values and inserts them into the event body.   Structure of the log message in the operator pipeline Before we continue, we should learn about the log message format inside the pipeline. Knowing this will help us to apply our own custom operators later. 
Suppose, we have a slightly different message from containerd:   2023-12-27T12:14:05.227298808+00:00 stderr F Hello World source=xyz   The entry for the above log will look like this in the operator pipeline:   { "timestamp": "2024-05-27T12:21:03.769505512Z", "body": "2024-05-27T12:14:05.227298808+00:00 stderr F Hello World source=xyz", "attributes": { "log": "Hello World source=xyz", "log.iostream": "stderr", "logtag": "F", "time": "2024-05-27T12:14:05.227298808+00:00", }, "resource": { "com.splunk.source": "/var/log/pods/(path to my file)", "com.splunk.sourcetype": "kube:container:(container name)", "k8s.container.name": "(container name)", "k8s.container.restart_count": "0", "k8s.namespace.name": "default", "k8s.pod.name": "(pod name)", "k8s.pod.uid": "(pod uid)", }, "severity": 0, "scope_name": "" }   See how for every info extracted from the log message, there is a corresponding match in the above entry in the attributes field. Regex_parser inserts values into the attributes field by default but this behavior can be changed with the parse_to option. As we can also see there is a log.iostream key in our message, even though we expected stream instead. This is because there is another operator later on in the pipeline that changes it, it looks like this:   - from: attributes.stream to: attributes["log.iostream"] type: move   This operator is used for simple move operations, as we can see it moves the stream field into log.iostream. How do you use custom operators? As an example, let’s consider the same log we saw earlier i.e.   2023-12-27T12:14:05.227298808+00:00 stderr F Hello World source=xyz   What if we want to extract the source from the above message and set it into com.splunk.source resource? Doing that would allow us to assign custom source values based on a log message instead of the path to the file - which is a default behavior. For such a use case, we may create the following operators:   - type: regex_parser id: my-custom-parser regex: '^.*source=(?P<source>[^ ]*).*$' - type: copy from: attributes["source"] to: resource["com.splunk.source"]   If we then use them, the entry for our message will look like this:   { "timestamp": "2024-05-27T12:21:03.769505512Z", "body": "2024-05-27T12:14:05.227298808+00:00 stderr F Hello World source=xyz", "attributes": { "log": "Hello World source=xyz", "log.iostream": "stderr", "logtag": "F", "time": "2024-05-27T12:14:05.227298808+00:00" "source": "xyz", }, "resource": { "com.splunk.source": "xyz", "com.splunk.sourcetype": "kube:container:(container name)", "k8s.container.name": "(container name)", "k8s.container.restart_count": "0", "k8s.namespace.name": "default", "k8s.pod.name": "(pod name)", "k8s.pod.uid": "(pod uid)", }, "severity": 0, "scope_name": "" }   Notice the attribute source, which is parsed by the regex_parser that we just created. This value is then copied into a resource[“com.splunk.source”] by the copy operator.  Using custom operators with values.yaml So, we learned how to create custom operators. But where do we specify them in my_values.yaml to actually use them? Enter extraOperators! For the example discussed above, we will now update our configuration file with the following settings:   logsCollection: containers: extraOperators: - type: regex_parser id: my-custom-parser regex: '^.*source=(?P<source>[^ ]*).*$' - type: copy from: attributes["source"] to: resource["com.splunk.source"]   Now restart the helm deployment and you’re good to go!   
helm upgrade --install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector

Some operators that you might find useful

add - can be used to insert either a static value or an expression
remove - removes a field, useful for cleaning up unnecessary data after other operations
move - moves (or renames) a field
json_parser - can be useful when you want to parse data saved in a JSON format
recombine - combines multi-line logs into one, a topic that we covered extensively in previous blog posts

And a lot more can be found here!

And some troubleshooting tips

So what if I’m not sure what my log entry looks like? I can’t possibly experiment with operators without that knowledge, right? Correct! Before experimenting with operators, you should know the structure of your log entry, or else you might end up with faulty data or lots of annoying guesswork. And how would I know the structure of my log entry? You can use the stdout operator:

logsCollection:
  containers:
    extraOperators:
      - type: stdout
        id: my-custom-stdout

Use the above config and restart the helm deployment. Now run the kubectl logs pod_name command and you’ll notice a bunch of logs containing JSON entries. That’s how your entry looks, and how you can debug your operators.

Conclusion

In this article, we’ve explored some ways of using operators to extract information from the logs. This very powerful feature can be used to parse logs in a more complex way not provided by a basic configuration. On the other hand, it is important not to overcomplicate things - if you can extract data using built-in functions then do so. SOCK provides many ways to extract data for many commonly used data formats, and using them is much simpler.
My company had a Splunk 8.0 server that hadn't been upgraded in years. There was a lot of abandoned testing on it over the years, so cleanup and multiple upgrades to get to 9.2.1 were going to be a big undertaking. I decided to stand up a new server with 9.2.1 and migrate over the data. We went live on it a few weeks ago. We've had no issues with ingesting data, searches, or alerts. However, the Indexes page under Settings shows 0 on all indexes for Current Size and Event Count, and Earliest Event and Latest Event are all blank. This is happening on all the indexes, both internal and non-internal. We noticed this before go-live and talked to support. They said it was because of the trial license we were using and that it would go away when we put our real license on during go-live. We did the license switch during go-live but we're still seeing 0 for everything. We can search these indexes, so there is data in them. I don't see any errors in the logs when we go to the Indexes page. If I go to Indexes and Volumes: Instances in the Monitoring Console, under Snapshots it shows my bucket count and space used on the file system, but index usage is 0 for everything. Under Historical it does show the index sizes.
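Not a fix for the page itself, but a quick cross-check of what the Indexes page should be displaying is to query a related REST endpoint directly (a sketch; run it from the search bar on the new server):

| rest /services/data/indexes
| table title currentDBSizeMB totalEventCount minTime maxTime

If this shows non-zero sizes and counts while the Settings page still shows 0, the data itself is fine and the problem lies in how the page is populated.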
Hi Guys,   Has anyone built a search where you can monitor the CPU on the Fortinet firewalls? It's in the app but doesn't seem to work.   Cheers, Ahmed
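A very rough sketch in case it helps while the app view is being debugged: FortiGate units log periodic performance-statistics events that include CPU and memory figures, so something along these lines may work (the sourcetype, event text, and field names are assumptions and differ between add-on versions, so adjust them to what your data actually contains):

index=* sourcetype=fgt_event "Performance statistics"
| timechart span=5m avg(cpu) AS avg_cpu BY host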
Hello, can the Splunk Python SDK be used along with a summary index? How? I wish to schedule periodic querying and extraction of data from Splunk, for which I usually use the SDK like this; it works for a one-time run since I removed my "collect index ..." code from the query:

service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)
kwargs_oneshot = {"earliest_time": "-1h", "latest_time": "now", "output_mode": 'json', "count": 100}
searchquery_oneshot = "search <query>"  # if I want collected index results to be used below periodically, i.e. every 1 hour, what change do I make in my code?
oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)

Thanks
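The SDK simply runs whatever SPL string you hand it, so it can query a summary index the same way it queries any other index. A minimal sketch, assuming a scheduled saved search already populates a summary index (the index name my_summary and source name my_summary_search are placeholders, and the sleep loop is only a stand-in for a real scheduler such as cron):

import time
import splunklib.client as client
import splunklib.results as results

service = client.connect(host=HOST, port=PORT, username=USERNAME, password=PASSWORD)

# Assumes a scheduled saved search already ends in: ... | collect index=my_summary
searchquery_oneshot = 'search index=my_summary source="my_summary_search" earliest=-1h latest=now'
kwargs_oneshot = {"output_mode": "json", "count": 0}

while True:
    oneshot_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
    for item in results.JSONResultsReader(oneshot_results):
        if isinstance(item, dict):
            print(item)
    time.sleep(3600)  # query the summary index once an hour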
I am unable to see any logs in splunk from my spring boot application. I am adding my xml property file, controller file, dependency file and splunk data input screenshots to help resolving the issue. I am breaking my head for past couple of days and unable to find what I am missing. HEC data input config UI HEC data input edit UI Global Settings The following API is logging events in my index: curl -k http://localhost:8088/services/collector/event -H "Authorization: Splunk ***" -d '{"event": "Hello, Splunk!"}' This is my log4j2-spring.xml: <?xml version="1.0" encoding="UTF-8"?> <Configuration> <Appenders> <Console name="console" target="SYSTEM_OUT"> <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" /> </Console> <SplunkHttp name="splunkhttp" url="http://localhost:8088" token="***" host="localhost" index="customer_api_dev" type="raw" source="http-event-logs" sourcetype="log4j" messageFormat="text" disableCertificateValidation="true"> <PatternLayout pattern="%m" /> </SplunkHttp> </Appenders> <Loggers> <!-- LOG everything at DEBUG level --> <Root level="debug"> <AppenderRef ref="console" /> <AppenderRef ref="splunkhttp" /> </Root> </Loggers> </Configuration> This is my controller: package com.example.advanceddbconcepts.controller; import com.example.advanceddbconcepts.entity.Customer; import com.example.advanceddbconcepts.entity.Order; import com.example.advanceddbconcepts.service.CustomerService; import lombok.Getter; import lombok.Setter; import org.apache.logging.log4j.LogManager; import org.apache.logging.log4j.Logger; import org.springframework.http.ResponseEntity; import org.springframework.web.bind.annotation.*; import java.util.List; @RestController @RequestMapping("/api/customers") public class CustomerController { Logger logger = LogManager.getLogger(CustomerController.class); private final CustomerService customerService; public CustomerController(CustomerService customerService) { this.customerService = customerService; } @PostMapping public ResponseEntity<Customer> createCustomerWithOrder(@RequestBody CustomerRequestOrder request) { Customer customer = new Customer(request.getCustomerName()); logger.info("Created a customer with name {}", request.getCustomerName()); List<Order> orders = request .getProductName() .stream() .map(Order::new) .toList(); Customer savedCustomer = customerService.createCustomerWithOrder(customer, orders); logger.info("API is successful"); return ResponseEntity.ok().body(savedCustomer); } @Getter @Setter public static class CustomerRequestOrder { private String customerName; private List<String> productName; } } I have added below dependencies in my pom.xml <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-log4j2</artifactId> <version>3.3.3</version> </dependency> <dependency> <groupId>com.splunk.logging</groupId> <artifactId>splunk-library-javalogging</artifactId> <version>1.11.8</version> </dependency> </dependencies> I am unable to see any logs in splunk after I hit the API. I am able to see logs in my local: 2024-09-02T19:37:00.629+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : Created a customer with name John Doe 2024-09-02T19:37:00.667+05:30 INFO 24912 --- [nio-8080-exec-4] c.e.a.controller.CustomerController : API is successful  
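One thing that may be worth checking (a guess, not a confirmed diagnosis): the SplunkHttp appender above is configured with type="raw", which posts to the /services/collector/raw endpoint, while the manual curl test that works uses /services/collector/event. Testing the raw endpoint directly with the same token would confirm whether that path is accepted; the channel GUID below is an arbitrary placeholder, since the raw endpoint can require a channel identifier:

curl -k "http://localhost:8088/services/collector/raw?channel=0e4a0f0a-1111-2222-3333-444455556666&sourcetype=log4j&index=customer_api_dev" \
  -H "Authorization: Splunk ***" \
  -d 'Hello from the raw endpoint'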
I am trying to use a lookup to specify the span option value in the bin command with map:

| inputlookup mylookupup.csv
| fields Index, SearchString, Tdiv
| map [ search index="$Index$" _raw="*$SearchString$*" | bin span="$Tdiv$" _time ]

The above search fails with:

Error in 'bin' command: The value for option span (Tdiv) is invalid. When span is expressed using a sub-second unit (ds, cs, ms, us), the span value needs to be < 1 second, and 1 second must be evenly divisible by the span value.

Example values in the Tdiv field: 15m, 1h. Could you help me with this problem?
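If each lookup row really needs its own span, map is the usual route, but when a single span value is enough, one commonly used trick (a sketch, not a drop-in replacement for the per-row map behaviour; the index and search string are written literally here) is to let a subsearch return the value so it is spliced into the outer search as text:

index=myindex _raw="*mysearchstring*"
| bin _time span=[ | inputlookup mylookupup.csv | head 1 | return $Tdiv ]

It is also worth checking the lookup for stray whitespace in the Tdiv column, since bin rejects span values it cannot parse.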
Hi folks.. I have an issue where I can't get an event to break right. The event looks like this   ************************************ 2024.09.03.141001 ************************************ sqlplus -S -L swiftfilter/_REMOVED_@PPP @"long_lock_alert.sql" TAG COUNT(*) --------------- ---------- PPP_locks_count 0 TAG COUNT(*) --------------- ---------- PPP_locks_count 0 SUCCESS End Time: 2024.09.03.141006   Props looks like this:   [nk_pp_tasks] SHOULD_LINEMERGE=false LINE_BREAKER=End Time([^\*]+) NO_BINARY_CHECK=true TIME_FORMAT=%Y.%m.%d.%H%M%S TIME_PREFIX=^.+[\r\n]\s BREAK_ONLY_BEFORE_DATE = false   Outcome is this:   When the logfile is imported through 'Add Data' everything looks fine and the event has not been broken up in 3. Any idees on how to make Splunk not break up the event ?
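A hedged props.conf sketch (the regex is untested against the real file, so the asterisk count and timestamp pattern may need adjusting): break only where a run of asterisks is followed by a yyyy.mm.dd.HHMMSS timestamp, so everything through the "End Time:" line stays in one event:

[nk_pp_tasks]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\*{20,}\s+\d{4}\.\d{2}\.\d{2}\.\d{6}
TIME_PREFIX = ^\*+\s+
TIME_FORMAT = %Y.%m.%d.%H%M%S
NO_BINARY_CHECK = true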
Hi, I want to extract the purple part, but Severity can be Critical as well. [Time:29-08@17:52:05.880] [60569130] 17:52:28.604 10.82.10.245 local0.notice [S=2952486] [BID=d57afa:30] RAISE-ALARM:acBoardEthernetLinkAlarm: [KOREASBC1] Ethernet link alarm. LAN port number 3 is down.; Severity:minor; Source:Board#1/EthernetLink#3; Unique ID:206; Additional Info1:GigabitEthernet 4/3; Additional Info2:SEL-SBC01; [Time:29-08@17:52:28.604] [60569131] 17:52:28.605 10.82.10.245 local0.warning [S=2952487] [BID=d57afa:30] RAISE-ALARM:acEthernetGroupAlarm: [KOREASBC1] Ethernet Group alarm. Ethernet Group 2 is Down.; Severity:major; Source:Board#1/EthernetGroup#2; Unique ID:207; Additional Info1:; [Time:29-08@17:52:28.605] [60569132] 17:52:28.721 10.82.10.245 local0.notice [S=2952488] [BID=d57afa:30] SYS_HA: Redundant unit physical network interface error fixed. [Code:0x46000] [Time:29-08@17:52:28.721] [60569133]
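Since the colour highlighting does not survive here, this is a guess at which part is meant, but if it is the alarm text and/or the Severity value, a sketch like this would pull both out (the capture names are arbitrary):

| rex "RAISE-ALARM:(?<alarm_name>\w+): (?<alarm_text>[^;]+); Severity:(?<severity>\w+)"

For the first sample event this yields alarm_name=acBoardEthernetLinkAlarm, alarm_text="[KOREASBC1] Ethernet link alarm. LAN port number 3 is down." and severity=minor; the severity capture will equally match major or critical.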
Hi, I want to extract the purple part. [Time:29-08@17:53:05.654] [60569222] 17:53:05.654 10.82.10.245 local3.notice [S=2952578] [SID=d57afa:30:1773434] (N 71121559) AcSIPDialog(#28)::TransactionFail - ClientTransaction(#471) failed sending message with CSeq 1 OPTIONS CallID 20478380282982024175249@1.215.255.202, the cause is Transport Error [Time:29-08@17:53:05.654] [60569223] 17:53:05.655 10.82.10.245 local0.warning [S=2952579] [BID=d57afa:30] RAISE-ALARM:acProxyConnectionLost: [KOREASBC1] Proxy Set Alarm Proxy Set 1 (PS_ITSP): Proxy lost. looking for another proxy; Severity:major; Source:Board#1/ProxyConnection#1; Unique ID:208; Additional Info1:; [Time:29-08@17:53:05.655] [60569224] 17:53:05.656 10.82.10.245 local0.warning [S=2952580] [BID=d57afa:30] RAISE-ALARM:acIpGroupNoRouteAlarm: [KOREASBC1] IP Group is temporarily blocked. IP Group (IPG_ITSP) Blocked Reason: No Working Proxy; Severity:major; Source:Board#1/IPGroup#2; Unique ID:209; Additional Info1:; [Time:29-08@17:53:05.656] [60569225] 17:53:05.657 10.82.10.245 local3.notice [S=2952581] [SID=d57afa:30:1773434] (N 71121560) AcSIPDialog(#28): Handling DIALOG_DISCONNECT_REQ in state DialogInitiated
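Again guessing at which part is highlighted: if it is the failure reason at the end of the TransactionFail line, a sketch (capture name arbitrary) could anchor on the literal text "the cause is" and stop at the trailing [Time:...] stamp:

| rex "the cause is (?<cause>.+?) \[Time:"

This would give cause="Transport Error" for the first event; for the RAISE-ALARM lines, the same Severity/alarm-text pattern from the previous post applies.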
Hi guys, I need to delete some records from the deployment server. However, when I do it via Forwarder Management I get the "This functionality has been deprecated" alert. Is there any other way I can proceed?
Hello, oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot) reader = results.JSONResultsReader(oneshotsearch_results) for item in reader:   if(isinstance(item, dict)):     for key in item:       if(key == '<...>'):         A = str(item[key])         print('A is :',A) The above code was working till yesterday. Now it does not enter the 1st for loop (i.e. for item in reader) anymore. I verified this by adding a print statement before the 1st if statement and it is not printing.
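One low-effort way to see why the loop yields nothing (a debugging sketch, not a fix): JSONResultsReader yields results.Message objects for diagnostics as well as dicts for results, so printing everything the reader produces usually reveals whether the search returned zero results or failed outright:

import splunklib.results as results

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
reader = results.JSONResultsReader(oneshotsearch_results)
for item in reader:
    if isinstance(item, results.Message):
        # diagnostic messages (errors, warnings, "no matching events", etc.) arrive here
        print("Message:", item.type, item.message)
    elif isinstance(item, dict):
        print("Result:", item)

If nothing at all is printed, the search itself most likely returned an empty result set for the chosen time range; it is also worth confirming that output_mode is still "json" in kwargs_oneshot, since JSONResultsReader expects JSON output.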